1:mod:`multiprocessing` --- Process-based parallelism
2====================================================
3
4.. module:: multiprocessing
5   :synopsis: Process-based parallelism.
6
7**Source code:** :source:`Lib/multiprocessing/`
8
9--------------
10
11.. include:: ../includes/wasm-notavail.rst
12
13Introduction
14------------
15
16:mod:`multiprocessing` is a package that supports spawning processes using an
17API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
18offers both local and remote concurrency, effectively side-stepping the
19:term:`Global Interpreter Lock <global interpreter lock>` by using
20subprocesses instead of threads.  Due
21to this, the :mod:`multiprocessing` module allows the programmer to fully
22leverage multiple processors on a given machine.  It runs on both Unix and
23Windows.
24
25The :mod:`multiprocessing` module also introduces APIs which do not have
26analogs in the :mod:`threading` module.  A prime example of this is the
27:class:`~multiprocessing.pool.Pool` object which offers a convenient means of
28parallelizing the execution of a function across multiple input values,
29distributing the input data across processes (data parallelism).  The following
30example demonstrates the common practice of defining such functions in a module
31so that child processes can successfully import that module.  This basic example
32of data parallelism using :class:`~multiprocessing.pool.Pool`, ::
33
34   from multiprocessing import Pool
35
36   def f(x):
37       return x*x
38
39   if __name__ == '__main__':
40       with Pool(5) as p:
41           print(p.map(f, [1, 2, 3]))
42
43will print to standard output ::
44
45   [1, 4, 9]
46
47
48.. seealso::
49
50   :class:`concurrent.futures.ProcessPoolExecutor` offers a higher level interface
51   to push tasks to a background process without blocking execution of the
52   calling process. Compared to using the :class:`~multiprocessing.pool.Pool`
53   interface directly, the :mod:`concurrent.futures` API more readily allows
54   the submission of work to the underlying process pool to be separated from
55   waiting for the results.
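
   As an illustrative sketch of that separation (reusing a module-level ``f``
   like the one above; the ``max_workers`` value is arbitrary), work can be
   submitted first and the result collected later::

      from concurrent.futures import ProcessPoolExecutor

      def f(x):
          return x*x

      if __name__ == '__main__':
          with ProcessPoolExecutor(max_workers=4) as executor:
              future = executor.submit(f, 10)   # schedule the call without waiting
              # ... other work can happen here ...
              print(future.result())            # block only when the result is needed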
56
57
58The :class:`Process` class
59~~~~~~~~~~~~~~~~~~~~~~~~~~
60
61In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
62object and then calling its :meth:`~Process.start` method.  :class:`Process`
63follows the API of :class:`threading.Thread`.  A trivial example of a
64multiprocess program is ::
65
66   from multiprocessing import Process
67
68   def f(name):
69       print('hello', name)
70
71   if __name__ == '__main__':
72       p = Process(target=f, args=('bob',))
73       p.start()
74       p.join()
75
76To show the individual process IDs involved, here is an expanded example::
77
78    from multiprocessing import Process
79    import os
80
81    def info(title):
82        print(title)
83        print('module name:', __name__)
84        print('parent process:', os.getppid())
85        print('process id:', os.getpid())
86
87    def f(name):
88        info('function f')
89        print('hello', name)
90
91    if __name__ == '__main__':
92        info('main line')
93        p = Process(target=f, args=('bob',))
94        p.start()
95        p.join()
96
97For an explanation of why the ``if __name__ == '__main__'`` part is
98necessary, see :ref:`multiprocessing-programming`.
99
100
101
102Contexts and start methods
103~~~~~~~~~~~~~~~~~~~~~~~~~~
104
105.. _multiprocessing-start-methods:
106
107Depending on the platform, :mod:`multiprocessing` supports three ways
108to start a process.  These *start methods* are
109
110  *spawn*
111    The parent process starts a fresh Python interpreter process.  The
112    child process will only inherit those resources necessary to run
113    the process object's :meth:`~Process.run` method.  In particular,
114    unnecessary file descriptors and handles from the parent process
115    will not be inherited.  Starting a process using this method is
116    rather slow compared to using *fork* or *forkserver*.
117
118    Available on Unix and Windows.  The default on Windows and macOS.
119
120  *fork*
121    The parent process uses :func:`os.fork` to fork the Python
122    interpreter.  The child process, when it begins, is effectively
123    identical to the parent process.  All resources of the parent are
124    inherited by the child process.  Note that safely forking a
125    multithreaded process is problematic.
126
127    Available on Unix only.  The default on Unix.
128
129  *forkserver*
130    When the program starts and selects the *forkserver* start method,
131    a server process is started.  From then on, whenever a new process
132    is needed, the parent process connects to the server and requests
133    that it fork a new process.  The fork server process is single
134    threaded so it is safe for it to use :func:`os.fork`.  No
135    unnecessary resources are inherited.
136
137    Available on Unix platforms which support passing file descriptors
138    over Unix pipes.
139
140.. versionchanged:: 3.8
141
142   On macOS, the *spawn* start method is now the default.  The *fork* start
143   method should be considered unsafe as it can lead to crashes of the
144   subprocess. See :issue:`33725`.
145
146.. versionchanged:: 3.4
147   *spawn* added on all Unix platforms, and *forkserver* added for
148   some Unix platforms.
   Child processes no longer inherit all of the parent's inheritable
   handles on Windows.
151
On Unix, using the *spawn* or *forkserver* start methods will also
start a *resource tracker* process which tracks the unlinked named
system resources (such as named semaphores or
:class:`~multiprocessing.shared_memory.SharedMemory` objects) created
by processes of the program.  When all processes
have exited, the resource tracker unlinks any remaining tracked object.
158Usually there should be none, but if a process was killed by a signal
159there may be some "leaked" resources.  (Neither leaked semaphores nor shared
160memory segments will be automatically unlinked until the next reboot. This is
161problematic for both objects because the system allows only a limited number of
162named semaphores, and shared memory segments occupy some space in the main
163memory.)
164
To select a start method, use :func:`set_start_method` in
the ``if __name__ == '__main__'`` clause of the main module.  For
example::
168
169       import multiprocessing as mp
170
171       def foo(q):
172           q.put('hello')
173
174       if __name__ == '__main__':
175           mp.set_start_method('spawn')
176           q = mp.Queue()
177           p = mp.Process(target=foo, args=(q,))
178           p.start()
179           print(q.get())
180           p.join()
181
182:func:`set_start_method` should not be used more than once in the
183program.
184
185Alternatively, you can use :func:`get_context` to obtain a context
186object.  Context objects have the same API as the multiprocessing
187module, and allow one to use multiple start methods in the same
188program. ::
189
190       import multiprocessing as mp
191
192       def foo(q):
193           q.put('hello')
194
195       if __name__ == '__main__':
196           ctx = mp.get_context('spawn')
197           q = ctx.Queue()
198           p = ctx.Process(target=foo, args=(q,))
199           p.start()
200           print(q.get())
201           p.join()
202
203Note that objects related to one context may not be compatible with
204processes for a different context.  In particular, locks created using
205the *fork* context cannot be passed to processes started using the
206*spawn* or *forkserver* start methods.
207
208A library which wants to use a particular start method should probably
209use :func:`get_context` to avoid interfering with the choice of the
210library user.
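
For example, a library helper might accept an optional context from the
caller and only fall back to choosing one itself (a brief sketch; the
``run_in_worker`` helper is purely illustrative and not part of this module)::

   import multiprocessing as mp

   def run_in_worker(target, args=(), ctx=None):
       # Prefer the caller's context; otherwise pick one locally without
       # touching the global start method.
       if ctx is None:
           ctx = mp.get_context('spawn')
       p = ctx.Process(target=target, args=args)
       p.start()
       p.join()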
211
212.. warning::
213
214   The ``'spawn'`` and ``'forkserver'`` start methods cannot currently
215   be used with "frozen" executables (i.e., binaries produced by
216   packages like **PyInstaller** and **cx_Freeze**) on Unix.
217   The ``'fork'`` start method does work.
218
219
220Exchanging objects between processes
221~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
222
223:mod:`multiprocessing` supports two types of communication channel between
224processes:
225
226**Queues**
227
228   The :class:`Queue` class is a near clone of :class:`queue.Queue`.  For
229   example::
230
231      from multiprocessing import Process, Queue
232
233      def f(q):
234          q.put([42, None, 'hello'])
235
236      if __name__ == '__main__':
237          q = Queue()
238          p = Process(target=f, args=(q,))
239          p.start()
240          print(q.get())    # prints "[42, None, 'hello']"
241          p.join()
242
243   Queues are thread and process safe.
244
245**Pipes**
246
247   The :func:`Pipe` function returns a pair of connection objects connected by a
248   pipe which by default is duplex (two-way).  For example::
249
250      from multiprocessing import Process, Pipe
251
252      def f(conn):
253          conn.send([42, None, 'hello'])
254          conn.close()
255
256      if __name__ == '__main__':
257          parent_conn, child_conn = Pipe()
258          p = Process(target=f, args=(child_conn,))
259          p.start()
260          print(parent_conn.recv())   # prints "[42, None, 'hello']"
261          p.join()
262
263   The two connection objects returned by :func:`Pipe` represent the two ends of
264   the pipe.  Each connection object has :meth:`~Connection.send` and
265   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
266   may become corrupted if two processes (or threads) try to read from or write
267   to the *same* end of the pipe at the same time.  Of course there is no risk
268   of corruption from processes using different ends of the pipe at the same
269   time.
270
271
272Synchronization between processes
273~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
274
275:mod:`multiprocessing` contains equivalents of all the synchronization
276primitives from :mod:`threading`.  For instance one can use a lock to ensure
277that only one process prints to standard output at a time::
278
279   from multiprocessing import Process, Lock
280
281   def f(l, i):
282       l.acquire()
283       try:
284           print('hello world', i)
285       finally:
286           l.release()
287
288   if __name__ == '__main__':
289       lock = Lock()
290
291       for num in range(10):
292           Process(target=f, args=(lock, num)).start()
293
294Without using the lock output from the different processes is liable to get all
295mixed up.
296
297
298Sharing state between processes
299~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
300
301As mentioned above, when doing concurrent programming it is usually best to
302avoid using shared state as far as possible.  This is particularly true when
303using multiple processes.
304
305However, if you really do need to use some shared data then
306:mod:`multiprocessing` provides a couple of ways of doing so.
307
308**Shared memory**
309
310   Data can be stored in a shared memory map using :class:`Value` or
311   :class:`Array`.  For example, the following code ::
312
313      from multiprocessing import Process, Value, Array
314
315      def f(n, a):
316          n.value = 3.1415927
317          for i in range(len(a)):
318              a[i] = -a[i]
319
320      if __name__ == '__main__':
321          num = Value('d', 0.0)
322          arr = Array('i', range(10))
323
324          p = Process(target=f, args=(num, arr))
325          p.start()
326          p.join()
327
328          print(num.value)
329          print(arr[:])
330
331   will print ::
332
333      3.1415927
334      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
335
336   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
337   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
338   double precision float and ``'i'`` indicates a signed integer.  These shared
339   objects will be process and thread-safe.
340
341   For more flexibility in using shared memory one can use the
342   :mod:`multiprocessing.sharedctypes` module which supports the creation of
343   arbitrary ctypes objects allocated from shared memory.
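
   For instance, a ctypes structure can be shared in much the same way (a
   short sketch; the ``Point`` structure is only an illustration)::

      from multiprocessing import Process
      from multiprocessing.sharedctypes import Value
      from ctypes import Structure, c_double

      class Point(Structure):
          _fields_ = [('x', c_double), ('y', c_double)]

      def move(point):
          point.x += 1.0
          point.y -= 1.0

      if __name__ == '__main__':
          pt = Value(Point, 1.0, 2.0)   # synchronized wrapper around a Point
          p = Process(target=move, args=(pt,))
          p.start()
          p.join()
          print(pt.x, pt.y)             # prints "2.0 1.0"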
344
345**Server process**
346
347   A manager object returned by :func:`Manager` controls a server process which
348   holds Python objects and allows other processes to manipulate them using
349   proxies.
350
351   A manager returned by :func:`Manager` will support types
352   :class:`list`, :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`,
353   :class:`RLock`, :class:`Semaphore`, :class:`BoundedSemaphore`,
354   :class:`Condition`, :class:`Event`, :class:`Barrier`,
355   :class:`Queue`, :class:`Value` and :class:`Array`.  For example, ::
356
357      from multiprocessing import Process, Manager
358
359      def f(d, l):
360          d[1] = '1'
361          d['2'] = 2
362          d[0.25] = None
363          l.reverse()
364
365      if __name__ == '__main__':
366          with Manager() as manager:
367              d = manager.dict()
368              l = manager.list(range(10))
369
370              p = Process(target=f, args=(d, l))
371              p.start()
372              p.join()
373
374              print(d)
375              print(l)
376
377   will print ::
378
379       {0.25: None, 1: '1', '2': 2}
380       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
381
382   Server process managers are more flexible than using shared memory objects
383   because they can be made to support arbitrary object types.  Also, a single
384   manager can be shared by processes on different computers over a network.
385   They are, however, slower than using shared memory.
386
387
388Using a pool of workers
389~~~~~~~~~~~~~~~~~~~~~~~
390
391The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
393processes in a few different ways.
394
395For example::
396
397   from multiprocessing import Pool, TimeoutError
398   import time
399   import os
400
401   def f(x):
402       return x*x
403
404   if __name__ == '__main__':
405       # start 4 worker processes
406       with Pool(processes=4) as pool:
407
408           # print "[0, 1, 4,..., 81]"
409           print(pool.map(f, range(10)))
410
411           # print same numbers in arbitrary order
412           for i in pool.imap_unordered(f, range(10)):
413               print(i)
414
415           # evaluate "f(20)" asynchronously
416           res = pool.apply_async(f, (20,))      # runs in *only* one process
417           print(res.get(timeout=1))             # prints "400"
418
419           # evaluate "os.getpid()" asynchronously
420           res = pool.apply_async(os.getpid, ()) # runs in *only* one process
421           print(res.get(timeout=1))             # prints the PID of that process
422
423           # launching multiple evaluations asynchronously *may* use more processes
424           multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
425           print([res.get(timeout=1) for res in multiple_results])
426
427           # make a single worker sleep for 10 seconds
428           res = pool.apply_async(time.sleep, (10,))
429           try:
430               print(res.get(timeout=1))
431           except TimeoutError:
432               print("We lacked patience and got a multiprocessing.TimeoutError")
433
434           print("For the moment, the pool remains available for more work")
435
436       # exiting the 'with'-block has stopped the pool
437       print("Now the pool is closed and no longer available")
438
439Note that the methods of a pool should only ever be used by the
440process which created it.
441
442.. note::
443
444   Functionality within this package requires that the ``__main__`` module be
   importable by the children. This is covered in
   :ref:`multiprocessing-programming`; however, it is worth pointing out here.
   This means that some examples, such as the
   :class:`multiprocessing.pool.Pool` examples, will not work in the
   interactive interpreter. For example::
449
450      >>> from multiprocessing import Pool
451      >>> p = Pool(5)
452      >>> def f(x):
453      ...     return x*x
454      ...
455      >>> with p:
456      ...   p.map(f, [1,2,3])
457      Process PoolWorker-1:
458      Process PoolWorker-2:
459      Process PoolWorker-3:
460      Traceback (most recent call last):
461      Traceback (most recent call last):
462      Traceback (most recent call last):
463      AttributeError: 'module' object has no attribute 'f'
464      AttributeError: 'module' object has no attribute 'f'
465      AttributeError: 'module' object has no attribute 'f'
466
467   (If you try this it will actually output three full tracebacks
468   interleaved in a semi-random fashion, and then you may have to
469   stop the parent process somehow.)
470
471
472Reference
473---------
474
475The :mod:`multiprocessing` package mostly replicates the API of the
476:mod:`threading` module.
477
478
479:class:`Process` and exceptions
480~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
481
482.. class:: Process(group=None, target=None, name=None, args=(), kwargs={}, \
483                   *, daemon=None)
484
485   Process objects represent activity that is run in a separate process. The
486   :class:`Process` class has equivalents of all the methods of
487   :class:`threading.Thread`.
488
489   The constructor should always be called with keyword arguments. *group*
490   should always be ``None``; it exists solely for compatibility with
491   :class:`threading.Thread`.  *target* is the callable object to be invoked by
492   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
493   called. *name* is the process name (see :attr:`name` for more details).
494   *args* is the argument tuple for the target invocation.  *kwargs* is a
495   dictionary of keyword arguments for the target invocation.  If provided,
496   the keyword-only *daemon* argument sets the process :attr:`daemon` flag
497   to ``True`` or ``False``.  If ``None`` (the default), this flag will be
498   inherited from the creating process.
499
500   By default, no arguments are passed to *target*. The *args* argument,
501   which defaults to ``()``, can be used to specify a list or tuple of the arguments
502   to pass to *target*.
503
504   If a subclass overrides the constructor, it must make sure it invokes the
505   base class constructor (:meth:`Process.__init__`) before doing anything else
506   to the process.
507
508   .. versionchanged:: 3.3
509      Added the *daemon* argument.
510
511   .. method:: run()
512
513      Method representing the process's activity.
514
515      You may override this method in a subclass.  The standard :meth:`run`
516      method invokes the callable object passed to the object's constructor as
517      the target argument, if any, with sequential and keyword arguments taken
518      from the *args* and *kwargs* arguments, respectively.
519
520      Using a list or tuple as the *args* argument passed to :class:`Process`
521      achieves the same effect.
522
523      Example::
524
525         >>> from multiprocessing import Process
526         >>> p = Process(target=print, args=[1])
527         >>> p.run()
528         1
529         >>> p = Process(target=print, args=(1,))
530         >>> p.run()
531         1
532
533   .. method:: start()
534
535      Start the process's activity.
536
537      This must be called at most once per process object.  It arranges for the
538      object's :meth:`run` method to be invoked in a separate process.
539
540   .. method:: join([timeout])
541
542      If the optional argument *timeout* is ``None`` (the default), the method
543      blocks until the process whose :meth:`join` method is called terminates.
544      If *timeout* is a positive number, it blocks at most *timeout* seconds.
545      Note that the method returns ``None`` if its process terminates or if the
546      method times out.  Check the process's :attr:`exitcode` to determine if
547      it terminated.
548
549      A process can be joined many times.
550
551      A process cannot join itself because this would cause a deadlock.  It is
552      an error to attempt to join a process before it has been started.
553
554   .. attribute:: name
555
556      The process's name.  The name is a string used for identification purposes
557      only.  It has no semantics.  Multiple processes may be given the same
558      name.
559
560      The initial name is set by the constructor.  If no explicit name is
561      provided to the constructor, a name of the form
562      'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' is constructed, where
563      each N\ :sub:`k` is the N-th child of its parent.
564
   .. method:: is_alive()
566
567      Return whether the process is alive.
568
569      Roughly, a process object is alive from the moment the :meth:`start`
570      method returns until the child process terminates.
571
572   .. attribute:: daemon
573
574      The process's daemon flag, a Boolean value.  This must be set before
575      :meth:`start` is called.
576
577      The initial value is inherited from the creating process.
578
579      When a process exits, it attempts to terminate all of its daemonic child
580      processes.
581
582      Note that a daemonic process is not allowed to create child processes.
583      Otherwise a daemonic process would leave its children orphaned if it gets
      terminated when its parent process exits. Additionally, these are **not**
      Unix daemons or services; they are normal processes that will be
      terminated (and not joined) if non-daemonic processes have exited.
587
588   In addition to the  :class:`threading.Thread` API, :class:`Process` objects
589   also support the following attributes and methods:
590
591   .. attribute:: pid
592
593      Return the process ID.  Before the process is spawned, this will be
594      ``None``.
595
596   .. attribute:: exitcode
597
598      The child's exit code.  This will be ``None`` if the process has not yet
599      terminated.
600
601      If the child's :meth:`run` method returned normally, the exit code
602      will be 0.  If it terminated via :func:`sys.exit` with an integer
603      argument *N*, the exit code will be *N*.
604
605      If the child terminated due to an exception not caught within
606      :meth:`run`, the exit code will be 1.  If it was terminated by
607      signal *N*, the exit code will be the negative value *-N*.
608
609   .. attribute:: authkey
610
611      The process's authentication key (a byte string).
612
613      When :mod:`multiprocessing` is initialized the main process is assigned a
614      random string using :func:`os.urandom`.
615
616      When a :class:`Process` object is created, it will inherit the
617      authentication key of its parent process, although this may be changed by
618      setting :attr:`authkey` to another byte string.
619
620      See :ref:`multiprocessing-auth-keys`.
621
622   .. attribute:: sentinel
623
624      A numeric handle of a system object which will become "ready" when
625      the process ends.
626
627      You can use this value if you want to wait on several events at
628      once using :func:`multiprocessing.connection.wait`.  Otherwise
629      calling :meth:`join()` is simpler.
630
631      On Windows, this is an OS handle usable with the ``WaitForSingleObject``
632      and ``WaitForMultipleObjects`` family of API calls.  On Unix, this is
633      a file descriptor usable with primitives from the :mod:`select` module.
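
      As a brief sketch, several processes can be awaited at once by passing
      their sentinels to :func:`multiprocessing.connection.wait`::

         from multiprocessing import Process
         from multiprocessing.connection import wait
         import time

         if __name__ == '__main__':
             procs = [Process(target=time.sleep, args=(i,)) for i in (1, 2, 3)]
             for p in procs:
                 p.start()
             sentinels = [p.sentinel for p in procs]
             while sentinels:
                 # wait() returns whichever sentinels have become ready
                 for s in wait(sentinels):
                     sentinels.remove(s)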
634
635      .. versionadded:: 3.3
636
637   .. method:: terminate()
638
639      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
640      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
641      finally clauses, etc., will not be executed.
642
643      Note that descendant processes of the process will *not* be terminated --
644      they will simply become orphaned.
645
646      .. warning::
647
648         If this method is used when the associated process is using a pipe or
649         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
651         acquired a lock or semaphore etc. then terminating it is liable to
652         cause other processes to deadlock.
653
654   .. method:: kill()
655
656      Same as :meth:`terminate()` but using the ``SIGKILL`` signal on Unix.
657
658      .. versionadded:: 3.7
659
660   .. method:: close()
661
662      Close the :class:`Process` object, releasing all resources associated
663      with it.  :exc:`ValueError` is raised if the underlying process
664      is still running.  Once :meth:`close` returns successfully, most
665      other methods and attributes of the :class:`Process` object will
666      raise :exc:`ValueError`.
667
668      .. versionadded:: 3.7
669
   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive` and
   :meth:`terminate` methods and the :attr:`exitcode` attribute should only
   be used by the process that created the process object.
673
674   Example usage of some of the methods of :class:`Process`:
675
676   .. doctest::
677
678       >>> import multiprocessing, time, signal
679       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
680       >>> print(p, p.is_alive())
681       <Process ... initial> False
682       >>> p.start()
683       >>> print(p, p.is_alive())
684       <Process ... started> True
685       >>> p.terminate()
686       >>> time.sleep(0.1)
687       >>> print(p, p.is_alive())
688       <Process ... stopped exitcode=-SIGTERM> False
689       >>> p.exitcode == -signal.SIGTERM
690       True
691
692.. exception:: ProcessError
693
694   The base class of all :mod:`multiprocessing` exceptions.
695
696.. exception:: BufferTooShort
697
698   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
699   buffer object is too small for the message read.
700
701   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
702   the message as a byte string.
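
   For example (a brief sketch)::

      from multiprocessing import Pipe, BufferTooShort

      a, b = Pipe()
      a.send_bytes(b'a fairly long message')
      buf = bytearray(4)               # deliberately too small
      try:
          b.recv_bytes_into(buf)
      except BufferTooShort as e:
          print('complete message was:', e.args[0])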
703
704.. exception:: AuthenticationError
705
706   Raised when there is an authentication error.
707
708.. exception:: TimeoutError
709
710   Raised by methods with a timeout when the timeout expires.
711
712Pipes and Queues
713~~~~~~~~~~~~~~~~
714
715When using multiple processes, one generally uses message passing for
716communication between processes and avoids having to use any synchronization
717primitives like locks.
718
719For passing messages one can use :func:`Pipe` (for a connection between two
720processes) or a queue (which allows multiple producers and consumers).
721
722The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types
723are multi-producer, multi-consumer :abbr:`FIFO (first-in, first-out)`
724queues modelled on the :class:`queue.Queue` class in the
725standard library.  They differ in that :class:`Queue` lacks the
726:meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced
727into Python 2.5's :class:`queue.Queue` class.
728
729If you use :class:`JoinableQueue` then you **must** call
730:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
731semaphore used to count the number of unfinished tasks may eventually overflow,
732raising an exception.
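
For illustration (a brief sketch), each consumer calls
:meth:`JoinableQueue.task_done` once per item so that the producer's
:meth:`JoinableQueue.join` call can eventually return::

   from multiprocessing import JoinableQueue, Process

   def worker(q):
       while True:
           item = q.get()
           print('processing', item)
           q.task_done()          # one call per item removed with get()

   if __name__ == '__main__':
       q = JoinableQueue()
       p = Process(target=worker, args=(q,), daemon=True)
       p.start()
       for item in range(5):
           q.put(item)
       q.join()                   # returns once every item is marked done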
733
734Note that one can also create a shared queue by using a manager object -- see
735:ref:`multiprocessing-managers`.
736
737.. note::
738
739   :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and
740   :exc:`queue.Full` exceptions to signal a timeout.  They are not available in
741   the :mod:`multiprocessing` namespace so you need to import them from
742   :mod:`queue`.
743
744.. note::
745
746   When an object is put on a queue, the object is pickled and a
747   background thread later flushes the pickled data to an underlying
748   pipe.  This has some consequences which are a little surprising,
749   but should not cause any practical difficulties -- if they really
750   bother you then you can instead use a queue created with a
751   :ref:`manager <multiprocessing-managers>`.
752
753   (1) After putting an object on an empty queue there may be an
754       infinitesimal delay before the queue's :meth:`~Queue.empty`
755       method returns :const:`False` and :meth:`~Queue.get_nowait` can
756       return without raising :exc:`queue.Empty`.
757
758   (2) If multiple processes are enqueuing objects, it is possible for
759       the objects to be received at the other end out-of-order.
760       However, objects enqueued by the same process will always be in
761       the expected order with respect to each other.
762
763.. warning::
764
765   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
766   while it is trying to use a :class:`Queue`, then the data in the queue is
767   likely to become corrupted.  This may cause any other process to get an
768   exception when it tries to use the queue later on.
769
770.. warning::
771
772   As mentioned above, if a child process has put items on a queue (and it has
773   not used :meth:`JoinableQueue.cancel_join_thread
774   <multiprocessing.Queue.cancel_join_thread>`), then that process will
775   not terminate until all buffered items have been flushed to the pipe.
776
777   This means that if you try joining that process you may get a deadlock unless
778   you are sure that all items which have been put on the queue have been
779   consumed.  Similarly, if the child process is non-daemonic then the parent
780   process may hang on exit when it tries to join all its non-daemonic children.
781
782   Note that a queue created using a manager does not have this issue.  See
783   :ref:`multiprocessing-programming`.
784
785For an example of the usage of queues for interprocess communication see
786:ref:`multiprocessing-examples`.
787
788
789.. function:: Pipe([duplex])
790
791   Returns a pair ``(conn1, conn2)`` of
792   :class:`~multiprocessing.connection.Connection` objects representing the
793   ends of a pipe.
794
795   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
796   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
797   used for receiving messages and ``conn2`` can only be used for sending
798   messages.
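
   For example (a small sketch), a one-way pipe only carries data in one
   direction::

      from multiprocessing import Pipe

      recv_conn, send_conn = Pipe(duplex=False)
      send_conn.send('ping')        # conn2 may only send
      print(recv_conn.recv())       # conn1 may only receive; prints "ping"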
799
800
801.. class:: Queue([maxsize])
802
803   Returns a process shared queue implemented using a pipe and a few
804   locks/semaphores.  When a process first puts an item on the queue a feeder
805   thread is started which transfers objects from a buffer into the pipe.
806
807   The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the
808   standard library's :mod:`queue` module are raised to signal timeouts.
809
810   :class:`Queue` implements all the methods of :class:`queue.Queue` except for
811   :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`.
812
813   .. method:: qsize()
814
815      Return the approximate size of the queue.  Because of
816      multithreading/multiprocessing semantics, this number is not reliable.
817
818      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
819      macOS where ``sem_getvalue()`` is not implemented.
820
821   .. method:: empty()
822
823      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
824      multithreading/multiprocessing semantics, this is not reliable.
825
826   .. method:: full()
827
828      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
829      multithreading/multiprocessing semantics, this is not reliable.
830
831   .. method:: put(obj[, block[, timeout]])
832
      Put *obj* into the queue.  If the optional argument *block* is ``True``
834      (the default) and *timeout* is ``None`` (the default), block if necessary until
835      a free slot is available.  If *timeout* is a positive number, it blocks at
836      most *timeout* seconds and raises the :exc:`queue.Full` exception if no
837      free slot was available within that time.  Otherwise (*block* is
838      ``False``), put an item on the queue if a free slot is immediately
839      available, else raise the :exc:`queue.Full` exception (*timeout* is
840      ignored in that case).
841
842      .. versionchanged:: 3.8
843         If the queue is closed, :exc:`ValueError` is raised instead of
844         :exc:`AssertionError`.
845
846   .. method:: put_nowait(obj)
847
848      Equivalent to ``put(obj, False)``.
849
850   .. method:: get([block[, timeout]])
851
      Remove and return an item from the queue.  If the optional argument
      *block* is ``True`` (the default) and *timeout* is ``None`` (the default),
      block if necessary until an item is available.  If *timeout* is a
      positive number, it blocks at most *timeout* seconds and raises the
      :exc:`queue.Empty` exception if no item was available within that time.
      Otherwise (*block* is ``False``), return an item if one is immediately
      available, else raise the :exc:`queue.Empty` exception (*timeout* is
      ignored in that case).
859
860      .. versionchanged:: 3.8
861         If the queue is closed, :exc:`ValueError` is raised instead of
862         :exc:`OSError`.
863
864   .. method:: get_nowait()
865
866      Equivalent to ``get(False)``.
867
868   :class:`multiprocessing.Queue` has a few additional methods not found in
869   :class:`queue.Queue`.  These methods are usually unnecessary for most
870   code:
871
872   .. method:: close()
873
874      Indicate that no more data will be put on this queue by the current
875      process.  The background thread will quit once it has flushed all buffered
876      data to the pipe.  This is called automatically when the queue is garbage
877      collected.
878
879   .. method:: join_thread()
880
881      Join the background thread.  This can only be used after :meth:`close` has
882      been called.  It blocks until the background thread exits, ensuring that
883      all data in the buffer has been flushed to the pipe.
884
885      By default if a process is not the creator of the queue then on exit it
886      will attempt to join the queue's background thread.  The process can call
887      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
888
889   .. method:: cancel_join_thread()
890
891      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
892      the background thread from being joined automatically when the process
893      exits -- see :meth:`join_thread`.
894
895      A better name for this method might be
896      ``allow_exit_without_flush()``.  It is likely to cause enqueued
897      data to be lost, and you almost certainly will not need to use it.
898      It is really only there if you need the current process to exit
899      immediately without waiting to flush enqueued data to the
900      underlying pipe, and you don't care about lost data.
901
902   .. note::
903
904      This class's functionality requires a functioning shared semaphore
905      implementation on the host operating system. Without one, the
906      functionality in this class will be disabled, and attempts to
907      instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
908      :issue:`3770` for additional information.  The same holds true for any
909      of the specialized queue types listed below.
910
911.. class:: SimpleQueue()
912
   It is a simplified :class:`Queue` type, very close to a locked :func:`Pipe`.
914
915   .. method:: close()
916
917      Close the queue: release internal resources.
918
919      A queue must not be used anymore after it is closed. For example,
920      :meth:`get`, :meth:`put` and :meth:`empty` methods must no longer be
921      called.
922
923      .. versionadded:: 3.9
924
925   .. method:: empty()
926
927      Return ``True`` if the queue is empty, ``False`` otherwise.
928
929   .. method:: get()
930
931      Remove and return an item from the queue.
932
933   .. method:: put(item)
934
935      Put *item* into the queue.
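
   For example (a minimal sketch, mirroring the :class:`Queue` example
   above)::

      from multiprocessing import Process, SimpleQueue

      def f(q):
          q.put('hello')

      if __name__ == '__main__':
          q = SimpleQueue()
          p = Process(target=f, args=(q,))
          p.start()
          print(q.get())    # prints "hello"
          p.join()
          q.close()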
936
937
938.. class:: JoinableQueue([maxsize])
939
940   :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
941   additionally has :meth:`task_done` and :meth:`join` methods.
942
943   .. method:: task_done()
944
945      Indicate that a formerly enqueued task is complete. Used by queue
946      consumers.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
947      call to :meth:`task_done` tells the queue that the processing on the task
948      is complete.
949
950      If a :meth:`~queue.Queue.join` is currently blocking, it will resume when all
951      items have been processed (meaning that a :meth:`task_done` call was
952      received for every item that had been :meth:`~Queue.put` into the queue).
953
954      Raises a :exc:`ValueError` if called more times than there were items
955      placed in the queue.
956
957
958   .. method:: join()
959
960      Block until all items in the queue have been gotten and processed.
961
962      The count of unfinished tasks goes up whenever an item is added to the
963      queue.  The count goes down whenever a consumer calls
964      :meth:`task_done` to indicate that the item was retrieved and all work on
965      it is complete.  When the count of unfinished tasks drops to zero,
966      :meth:`~queue.Queue.join` unblocks.
967
968
969Miscellaneous
970~~~~~~~~~~~~~
971
972.. function:: active_children()
973
   Return a list of all live children of the current process.
975
976   Calling this has the side effect of "joining" any processes which have
977   already finished.
978
979.. function:: cpu_count()
980
981   Return the number of CPUs in the system.
982
983   This number is not equivalent to the number of CPUs the current process can
984   use.  The number of usable CPUs can be obtained with
   ``len(os.sched_getaffinity(0))``.
986
987   When the number of CPUs cannot be determined a :exc:`NotImplementedError`
988   is raised.
989
990   .. seealso::
991      :func:`os.cpu_count`
992
993.. function:: current_process()
994
995   Return the :class:`Process` object corresponding to the current process.
996
997   An analogue of :func:`threading.current_thread`.
998
999.. function:: parent_process()
1000
1001   Return the :class:`Process` object corresponding to the parent process of
1002   the :func:`current_process`. For the main process, ``parent_process`` will
1003   be ``None``.
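
   For example (a brief sketch)::

      from multiprocessing import Process, current_process, parent_process

      def f():
          print('I am', current_process().name)
          print('my parent is', parent_process().name)

      if __name__ == '__main__':
          print(parent_process())     # the main process has no parent: None
          p = Process(target=f)
          p.start()
          p.join()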
1004
1005   .. versionadded:: 3.8
1006
1007.. function:: freeze_support()
1008
1009   Add support for when a program which uses :mod:`multiprocessing` has been
1010   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
1011   **PyInstaller** and **cx_Freeze**.)
1012
1013   One needs to call this function straight after the ``if __name__ ==
1014   '__main__'`` line of the main module.  For example::
1015
1016      from multiprocessing import Process, freeze_support
1017
1018      def f():
1019          print('hello world!')
1020
1021      if __name__ == '__main__':
1022          freeze_support()
1023          Process(target=f).start()
1024
1025   If the ``freeze_support()`` line is omitted then trying to run the frozen
1026   executable will raise :exc:`RuntimeError`.
1027
1028   Calling ``freeze_support()`` has no effect when invoked on any operating
1029   system other than Windows.  In addition, if the module is being run
1030   normally by the Python interpreter on Windows (the program has not been
1031   frozen), then ``freeze_support()`` has no effect.
1032
1033.. function:: get_all_start_methods()
1034
1035   Returns a list of the supported start methods, the first of which
1036   is the default.  The possible start methods are ``'fork'``,
1037   ``'spawn'`` and ``'forkserver'``.  On Windows only ``'spawn'`` is
1038   available.  On Unix ``'fork'`` and ``'spawn'`` are always
1039   supported, with ``'fork'`` being the default.
1040
1041   .. versionadded:: 3.4
1042
1043.. function:: get_context(method=None)
1044
1045   Return a context object which has the same attributes as the
1046   :mod:`multiprocessing` module.
1047
1048   If *method* is ``None`` then the default context is returned.
   Otherwise *method* should be ``'fork'``, ``'spawn'`` or
   ``'forkserver'``.  :exc:`ValueError` is raised if the specified
1051   start method is not available.
1052
1053   .. versionadded:: 3.4
1054
1055.. function:: get_start_method(allow_none=False)
1056
   Return the name of the start method used for starting processes.
1058
1059   If the start method has not been fixed and *allow_none* is false,
1060   then the start method is fixed to the default and the name is
1061   returned.  If the start method has not been fixed and *allow_none*
1062   is true then ``None`` is returned.
1063
1064   The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'``
1065   or ``None``.  ``'fork'`` is the default on Unix, while ``'spawn'`` is
1066   the default on Windows and macOS.
1067
   .. versionchanged:: 3.8
      On macOS, the *spawn* start method is now the default.  The *fork* start
      method should be considered unsafe as it can lead to crashes of the
      subprocess. See :issue:`33725`.

   .. versionadded:: 3.4
1075
1076.. function:: set_executable(executable)
1077
1078   Set the path of the Python interpreter to use when starting a child process.
1079   (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::
1081
1082      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
1083
1084   before they can create child processes.
1085
1086   .. versionchanged:: 3.4
1087      Now supported on Unix when the ``'spawn'`` start method is used.
1088
1089   .. versionchanged:: 3.11
1090      Accepts a :term:`path-like object`.
1091
1092.. function:: set_start_method(method, force=False)
1093
1094   Set the method which should be used to start child processes.
1095   The *method* argument can be ``'fork'``, ``'spawn'`` or ``'forkserver'``.
1096   Raises :exc:`RuntimeError` if the start method has already been set and *force*
1097   is not ``True``.  If *method* is ``None`` and *force* is ``True`` then the start
1098   method is set to ``None``.  If *method* is ``None`` and *force* is ``False``
1099   then the context is set to the default context.
1100
1101   Note that this should be called at most once, and it should be
1102   protected inside the ``if __name__ == '__main__'`` clause of the
1103   main module.
1104
1105   .. versionadded:: 3.4
1106
1107.. note::
1108
1109   :mod:`multiprocessing` contains no analogues of
1110   :func:`threading.active_count`, :func:`threading.enumerate`,
1111   :func:`threading.settrace`, :func:`threading.setprofile`,
1112   :class:`threading.Timer`, or :class:`threading.local`.
1113
1114
1115Connection Objects
1116~~~~~~~~~~~~~~~~~~
1117
1118.. currentmodule:: multiprocessing.connection
1119
1120Connection objects allow the sending and receiving of picklable objects or
1121strings.  They can be thought of as message oriented connected sockets.
1122
1123Connection objects are usually created using
1124:func:`Pipe <multiprocessing.Pipe>` -- see also
1125:ref:`multiprocessing-listeners-clients`.
1126
1127.. class:: Connection
1128
1129   .. method:: send(obj)
1130
1131      Send an object to the other end of the connection which should be read
1132      using :meth:`recv`.
1133
1134      The object must be picklable.  Very large pickles (approximately 32 MiB+,
1135      though it depends on the OS) may raise a :exc:`ValueError` exception.
1136
1137   .. method:: recv()
1138
1139      Return an object sent from the other end of the connection using
1140      :meth:`send`.  Blocks until there is something to receive.  Raises
1141      :exc:`EOFError` if there is nothing left to receive
1142      and the other end was closed.
1143
1144   .. method:: fileno()
1145
1146      Return the file descriptor or handle used by the connection.
1147
1148   .. method:: close()
1149
1150      Close the connection.
1151
1152      This is called automatically when the connection is garbage collected.
1153
1154   .. method:: poll([timeout])
1155
1156      Return whether there is any data available to be read.
1157
1158      If *timeout* is not specified then it will return immediately.  If
1159      *timeout* is a number then this specifies the maximum time in seconds to
1160      block.  If *timeout* is ``None`` then an infinite timeout is used.
1161
1162      Note that multiple connection objects may be polled at once by
1163      using :func:`multiprocessing.connection.wait`.
1164
1165   .. method:: send_bytes(buffer[, offset[, size]])
1166
1167      Send byte data from a :term:`bytes-like object` as a complete message.
1168
1169      If *offset* is given then data is read from that position in *buffer*.  If
      *size* is given then that many bytes will be read from *buffer*.  Very large
      buffers (approximately 32 MiB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
1173
1174   .. method:: recv_bytes([maxlength])
1175
1176      Return a complete message of byte data sent from the other end of the
1177      connection as a string.  Blocks until there is something to receive.
1178      Raises :exc:`EOFError` if there is nothing left
1179      to receive and the other end has closed.
1180
1181      If *maxlength* is specified and the message is longer than *maxlength*
1182      then :exc:`OSError` is raised and the connection will no longer be
1183      readable.
1184
1185      .. versionchanged:: 3.3
1186         This function used to raise :exc:`IOError`, which is now an
1187         alias of :exc:`OSError`.
1188
1189
1190   .. method:: recv_bytes_into(buffer[, offset])
1191
1192      Read into *buffer* a complete message of byte data sent from the other end
1193      of the connection and return the number of bytes in the message.  Blocks
1194      until there is something to receive.  Raises
1195      :exc:`EOFError` if there is nothing left to receive and the other end was
1196      closed.
1197
1198      *buffer* must be a writable :term:`bytes-like object`.  If
1199      *offset* is given then the message will be written into the buffer from
1200      that position.  Offset must be a non-negative integer less than the
1201      length of *buffer* (in bytes).
1202
1203      If the buffer is too short then a :exc:`BufferTooShort` exception is
1204      raised and the complete message is available as ``e.args[0]`` where ``e``
1205      is the exception instance.
1206
1207   .. versionchanged:: 3.3
1208      Connection objects themselves can now be transferred between processes
1209      using :meth:`Connection.send` and :meth:`Connection.recv`.
1210
1211   .. versionadded:: 3.3
1212      Connection objects now support the context management protocol -- see
1213      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
1214      connection object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
1215
1216For example:
1217
1218.. doctest::
1219
1220    >>> from multiprocessing import Pipe
1221    >>> a, b = Pipe()
1222    >>> a.send([1, 'hello', None])
1223    >>> b.recv()
1224    [1, 'hello', None]
1225    >>> b.send_bytes(b'thank you')
1226    >>> a.recv_bytes()
1227    b'thank you'
1228    >>> import array
1229    >>> arr1 = array.array('i', range(5))
1230    >>> arr2 = array.array('i', [0] * 10)
1231    >>> a.send_bytes(arr1)
1232    >>> count = b.recv_bytes_into(arr2)
1233    >>> assert count == len(arr1) * arr1.itemsize
1234    >>> arr2
1235    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
1236
1237.. _multiprocessing-recv-pickle-security:
1238
1239.. warning::
1240
1241    The :meth:`Connection.recv` method automatically unpickles the data it
1242    receives, which can be a security risk unless you can trust the process
1243    which sent the message.
1244
1245    Therefore, unless the connection object was produced using :func:`Pipe` you
1246    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
1247    methods after performing some sort of authentication.  See
1248    :ref:`multiprocessing-auth-keys`.
1249
1250.. warning::
1251
1252    If a process is killed while it is trying to read or write to a pipe then
1253    the data in the pipe is likely to become corrupted, because it may become
1254    impossible to be sure where the message boundaries lie.
1255
1256
1257Synchronization primitives
1258~~~~~~~~~~~~~~~~~~~~~~~~~~
1259
1260.. currentmodule:: multiprocessing
1261
1262Generally synchronization primitives are not as necessary in a multiprocess
1263program as they are in a multithreaded program.  See the documentation for
the :mod:`threading` module.
1265
1266Note that one can also create synchronization primitives by using a manager
1267object -- see :ref:`multiprocessing-managers`.
1268
1269.. class:: Barrier(parties[, action[, timeout]])
1270
1271   A barrier object: a clone of :class:`threading.Barrier`.
1272
1273   .. versionadded:: 3.3
1274
1275.. class:: BoundedSemaphore([value])
1276
1277   A bounded semaphore object: a close analog of
1278   :class:`threading.BoundedSemaphore`.
1279
   The only difference from its close analog is that its ``acquire`` method's
   first argument is named *block*, consistent with :meth:`Lock.acquire`.
1282
1283   .. note::
1284      On macOS, this is indistinguishable from :class:`Semaphore` because
1285      ``sem_getvalue()`` is not implemented on that platform.
1286
1287.. class:: Condition([lock])
1288
   A condition variable: a clone of :class:`threading.Condition`.
1290
1291   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
1292   object from :mod:`multiprocessing`.
1293
1294   .. versionchanged:: 3.3
1295      The :meth:`~threading.Condition.wait_for` method was added.
1296
1297.. class:: Event()
1298
1299   A clone of :class:`threading.Event`.
1300
1301
1302.. class:: Lock()
1303
1304   A non-recursive lock object: a close analog of :class:`threading.Lock`.
1305   Once a process or thread has acquired a lock, subsequent attempts to
1306   acquire it from any process or thread will block until it is released;
1307   any process or thread may release it.  The concepts and behaviors of
1308   :class:`threading.Lock` as it applies to threads are replicated here in
1309   :class:`multiprocessing.Lock` as it applies to either processes or threads,
1310   except as noted.
1311
1312   Note that :class:`Lock` is actually a factory function which returns an
1313   instance of ``multiprocessing.synchronize.Lock`` initialized with a
1314   default context.
1315
1316   :class:`Lock` supports the :term:`context manager` protocol and thus may be
1317   used in :keyword:`with` statements.
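
   For example (a short sketch), the earlier synchronization example can use
   the lock as a context manager instead of calling ``acquire`` and
   ``release`` explicitly::

      from multiprocessing import Process, Lock

      def f(lock, i):
          with lock:
              print('hello world', i)

      if __name__ == '__main__':
          lock = Lock()
          for num in range(10):
              Process(target=f, args=(lock, num)).start()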
1318
1319   .. method:: acquire(block=True, timeout=None)
1320
1321      Acquire a lock, blocking or non-blocking.
1322
1323      With the *block* argument set to ``True`` (the default), the method call
1324      will block until the lock is in an unlocked state, then set it to locked
1325      and return ``True``.  Note that the name of this first argument differs
1326      from that in :meth:`threading.Lock.acquire`.
1327
1328      With the *block* argument set to ``False``, the method call does not
1329      block.  If the lock is currently in a locked state, return ``False``;
1330      otherwise set the lock to a locked state and return ``True``.
1331
1332      When invoked with a positive, floating-point value for *timeout*, block
1333      for at most the number of seconds specified by *timeout* as long as
1334      the lock can not be acquired.  Invocations with a negative value for
1335      *timeout* are equivalent to a *timeout* of zero.  Invocations with a
1336      *timeout* value of ``None`` (the default) set the timeout period to
1337      infinite.  Note that the treatment of negative or ``None`` values for
1338      *timeout* differs from the implemented behavior in
1339      :meth:`threading.Lock.acquire`.  The *timeout* argument has no practical
1340      implications if the *block* argument is set to ``False`` and is thus
1341      ignored.  Returns ``True`` if the lock has been acquired or ``False`` if
1342      the timeout period has elapsed.
1343
1344
1345   .. method:: release()
1346
1347      Release a lock.  This can be called from any process or thread, not only
1348      the process or thread which originally acquired the lock.
1349
1350      Behavior is the same as in :meth:`threading.Lock.release` except that
1351      when invoked on an unlocked lock, a :exc:`ValueError` is raised.
1352
1353
1354.. class:: RLock()
1355
1356   A recursive lock object: a close analog of :class:`threading.RLock`.  A
1357   recursive lock must be released by the process or thread that acquired it.
1358   Once a process or thread has acquired a recursive lock, the same process
1359   or thread may acquire it again without blocking; that process or thread
1360   must release it once for each time it has been acquired.
1361
1362   Note that :class:`RLock` is actually a factory function which returns an
1363   instance of ``multiprocessing.synchronize.RLock`` initialized with a
1364   default context.
1365
1366   :class:`RLock` supports the :term:`context manager` protocol and thus may be
1367   used in :keyword:`with` statements.
1368
1369
1370   .. method:: acquire(block=True, timeout=None)
1371
1372      Acquire a lock, blocking or non-blocking.
1373
1374      When invoked with the *block* argument set to ``True``, block until the
1375      lock is in an unlocked state (not owned by any process or thread) unless
1376      the lock is already owned by the current process or thread.  The current
1377      process or thread then takes ownership of the lock (if it does not
1378      already have ownership) and the recursion level inside the lock increments
1379      by one, resulting in a return value of ``True``.  Note that there are
1380      several differences in this first argument's behavior compared to the
1381      implementation of :meth:`threading.RLock.acquire`, starting with the name
1382      of the argument itself.
1383
1384      When invoked with the *block* argument set to ``False``, do not block.
1385      If the lock has already been acquired (and thus is owned) by another
1386      process or thread, the current process or thread does not take ownership
1387      and the recursion level within the lock is not changed, resulting in
1388      a return value of ``False``.  If the lock is in an unlocked state, the
1389      current process or thread takes ownership and the recursion level is
1390      incremented, resulting in a return value of ``True``.
1391
1392      Use and behaviors of the *timeout* argument are the same as in
1393      :meth:`Lock.acquire`.  Note that some of these behaviors of *timeout*
1394      differ from the implemented behaviors in :meth:`threading.RLock.acquire`.
1395
1396
1397   .. method:: release()
1398
1399      Release a lock, decrementing the recursion level.  If after the
1400      decrement the recursion level is zero, reset the lock to unlocked (not
1401      owned by any process or thread) and if any other processes or threads
1402      are blocked waiting for the lock to become unlocked, allow exactly one
1403      of them to proceed.  If after the decrement the recursion level is still
1404      nonzero, the lock remains locked and owned by the calling process or
1405      thread.
1406
1407      Only call this method when the calling process or thread owns the lock.
1408      An :exc:`AssertionError` is raised if this method is called by a process
1409      or thread other than the owner or if the lock is in an unlocked (unowned)
1410      state.  Note that the type of exception raised in this situation
1411      differs from the implemented behavior in :meth:`threading.RLock.release`.
1412
1413
1414.. class:: Semaphore([value])
1415
1416   A semaphore object: a close analog of :class:`threading.Semaphore`.
1417
   The only difference from its close analog is that its ``acquire`` method's
   first argument is named *block*, consistent with :meth:`Lock.acquire`.
1420
1421.. note::
1422
1423   On macOS, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
1424   a timeout will emulate that function's behavior using a sleeping loop.
1425
1426.. note::
1427
1428   If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
1429   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
1430   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
1431   or :meth:`Condition.wait` then the call will be immediately interrupted and
1432   :exc:`KeyboardInterrupt` will be raised.
1433
1434   This differs from the behaviour of :mod:`threading` where SIGINT will be
1435   ignored while the equivalent blocking calls are in progress.
1436
1437.. note::
1438
1439   Some of this package's functionality requires a functioning shared semaphore
1440   implementation on the host operating system. Without one, the
1441   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
1442   import it will result in an :exc:`ImportError`. See
1443   :issue:`3770` for additional information.
1444
1445
1446Shared :mod:`ctypes` Objects
1447~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1448
1449It is possible to create shared objects using shared memory which can be
1450inherited by child processes.
1451
1452.. function:: Value(typecode_or_type, *args, lock=True)
1453
1454   Return a :mod:`ctypes` object allocated from shared memory.  By default the
1455   return value is actually a synchronized wrapper for the object.  The object
1456   itself can be accessed via the *value* attribute of a :class:`Value`.
1457
1458   *typecode_or_type* determines the type of the returned object: it is either a
1459   ctypes type or a one character typecode of the kind used by the :mod:`array`
1460   module.  *\*args* is passed on to the constructor for the type.
1461
1462   If *lock* is ``True`` (the default) then a new recursive lock
1463   object is created to synchronize access to the value.  If *lock* is
1464   a :class:`Lock` or :class:`RLock` object then that will be used to
1465   synchronize access to the value.  If *lock* is ``False`` then
1466   access to the returned object will not be automatically protected
1467   by a lock, so it will not necessarily be "process-safe".
1468
1469   Operations like ``+=`` which involve a read and write are not
1470   atomic.  So if, for instance, you want to atomically increment a
1471   shared value it is insufficient to just do ::
1472
1473       counter.value += 1
1474
1475   Assuming the associated lock is recursive (which it is by default)
1476   you can instead do ::
1477
1478       with counter.get_lock():
1479           counter.value += 1
1480
1481   Note that *lock* is a keyword-only argument.
1482
1483.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1484
1485   Return a ctypes array allocated from shared memory.  By default the return
1486   value is actually a synchronized wrapper for the array.
1487
1488   *typecode_or_type* determines the type of the elements of the returned array:
1489   it is either a ctypes type or a one character typecode of the kind used by
1490   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
1491   determines the length of the array, and the array will be initially zeroed.
1492   Otherwise, *size_or_initializer* is a sequence which is used to initialize
1493   the array and whose length determines the length of the array.
1494
1495   If *lock* is ``True`` (the default) then a new lock object is created to
1496   synchronize access to the value.  If *lock* is a :class:`Lock` or
1497   :class:`RLock` object then that will be used to synchronize access to the
1498   value.  If *lock* is ``False`` then access to the returned object will not be
1499   automatically protected by a lock, so it will not necessarily be
1500   "process-safe".
1501
   Note that *lock* is a keyword-only argument.
1503
1504   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
1505   attributes which allow one to use it to store and retrieve strings.
1506
1507
1508The :mod:`multiprocessing.sharedctypes` module
1509>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1510
1511.. module:: multiprocessing.sharedctypes
1512   :synopsis: Allocate ctypes objects from shared memory.
1513
1514The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1515:mod:`ctypes` objects from shared memory which can be inherited by child
1516processes.
1517
1518.. note::
1519
1520   Although it is possible to store a pointer in shared memory remember that
1521   this will refer to a location in the address space of a specific process.
1522   However, the pointer is quite likely to be invalid in the context of a second
1523   process and trying to dereference the pointer from the second process may
1524   cause a crash.
1525
1526.. function:: RawArray(typecode_or_type, size_or_initializer)
1527
1528   Return a ctypes array allocated from shared memory.
1529
1530   *typecode_or_type* determines the type of the elements of the returned array:
1531   it is either a ctypes type or a one character typecode of the kind used by
1532   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1533   determines the length of the array, and the array will be initially zeroed.
1534   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1535   array and whose length determines the length of the array.
1536
1537   Note that setting and getting an element is potentially non-atomic -- use
1538   :func:`Array` instead to make sure that access is automatically synchronized
1539   using a lock.
1540
1541.. function:: RawValue(typecode_or_type, *args)
1542
1543   Return a ctypes object allocated from shared memory.
1544
1545   *typecode_or_type* determines the type of the returned object: it is either a
1546   ctypes type or a one character typecode of the kind used by the :mod:`array`
1547   module.  *\*args* is passed on to the constructor for the type.
1548
1549   Note that setting and getting the value is potentially non-atomic -- use
1550   :func:`Value` instead to make sure that access is automatically synchronized
1551   using a lock.
1552
1553   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1554   attributes which allow one to use it to store and retrieve strings -- see
1555   documentation for :mod:`ctypes`.
1556
1557.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1558
1559   The same as :func:`RawArray` except that depending on the value of *lock* a
1560   process-safe synchronization wrapper may be returned instead of a raw ctypes
1561   array.
1562
1563   If *lock* is ``True`` (the default) then a new lock object is created to
1564   synchronize access to the value.  If *lock* is a
1565   :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1566   then that will be used to synchronize access to the
1567   value.  If *lock* is ``False`` then access to the returned object will not be
1568   automatically protected by a lock, so it will not necessarily be
1569   "process-safe".
1570
1571   Note that *lock* is a keyword-only argument.
1572
1573.. function:: Value(typecode_or_type, *args, lock=True)
1574
1575   The same as :func:`RawValue` except that depending on the value of *lock* a
1576   process-safe synchronization wrapper may be returned instead of a raw ctypes
1577   object.
1578
1579   If *lock* is ``True`` (the default) then a new lock object is created to
1580   synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
1581   :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
1582   value.  If *lock* is ``False`` then access to the returned object will not be
1583   automatically protected by a lock, so it will not necessarily be
1584   "process-safe".
1585
1586   Note that *lock* is a keyword-only argument.
1587
1588.. function:: copy(obj)
1589
1590   Return a ctypes object allocated from shared memory which is a copy of the
1591   ctypes object *obj*.
1592
1593.. function:: synchronized(obj[, lock])
1594
1595   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1596   synchronize access.  If *lock* is ``None`` (the default) then a
1597   :class:`multiprocessing.RLock` object is created automatically.
1598
1599   A synchronized wrapper will have two methods in addition to those of the
1600   object it wraps: :meth:`get_obj` returns the wrapped object and
1601   :meth:`get_lock` returns the lock object used for synchronization.
1602
1603   Note that accessing the ctypes object through the wrapper can be a lot slower
1604   than accessing the raw ctypes object.
1605
1606   .. versionchanged:: 3.5
1607      Synchronized objects support the :term:`context manager` protocol.
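
   As a minimal sketch (relying on the automatically created lock), a raw
   value can be wrapped after creation::

      from ctypes import c_int
      from multiprocessing.sharedctypes import RawValue, synchronized

      raw = RawValue(c_int, 0)
      counter = synchronized(raw)        # wraps raw with a new RLock

      with counter.get_lock():           # the lock returned by get_lock()
          counter.value += 1

      print(counter.get_obj() is raw)    # True: still the same raw object
      print(counter.value)               # 1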
1608
1609
1610The table below compares the syntax for creating shared ctypes objects from
1611shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1612subclass of :class:`ctypes.Structure`.)
1613
1614==================== ========================== ===========================
1615ctypes               sharedctypes using type    sharedctypes using typecode
1616==================== ========================== ===========================
1617c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1618MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1619(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1620(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1621==================== ========================== ===========================
1622
1623
1624Below is an example where a number of ctypes objects are modified by a child
1625process::
1626
1627   from multiprocessing import Process, Lock
1628   from multiprocessing.sharedctypes import Value, Array
1629   from ctypes import Structure, c_double
1630
1631   class Point(Structure):
1632       _fields_ = [('x', c_double), ('y', c_double)]
1633
1634   def modify(n, x, s, A):
1635       n.value **= 2
1636       x.value **= 2
1637       s.value = s.value.upper()
1638       for a in A:
1639           a.x **= 2
1640           a.y **= 2
1641
1642   if __name__ == '__main__':
1643       lock = Lock()
1644
1645       n = Value('i', 7)
1646       x = Value(c_double, 1.0/3.0, lock=False)
1647       s = Array('c', b'hello world', lock=lock)
1648       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1649
1650       p = Process(target=modify, args=(n, x, s, A))
1651       p.start()
1652       p.join()
1653
1654       print(n.value)
1655       print(x.value)
1656       print(s.value)
1657       print([(a.x, a.y) for a in A])
1658
1659
1660.. highlight:: none
1661
1662The results printed are ::
1663
1664    49
1665    0.1111111111111111
1666    HELLO WORLD
1667    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1668
1669.. highlight:: python3
1670
1671
1672.. _multiprocessing-managers:
1673
1674Managers
1675~~~~~~~~
1676
1677Managers provide a way to create data which can be shared between different
1678processes, including sharing over a network between processes running on
1679different machines. A manager object controls a server process which manages
1680*shared objects*.  Other processes can access the shared objects by using
1681proxies.
1682
1683.. function:: multiprocessing.Manager()
1684   :module:
1685
1686   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1687   can be used for sharing objects between processes.  The returned manager
1688   object corresponds to a spawned child process and has methods which will
1689   create shared objects and return corresponding proxies.
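
   A minimal sketch of sharing a dictionary and a list through a manager
   (the helper function ``worker`` is illustrative)::

      from multiprocessing import Process, Manager

      def worker(d, l):
          d['count'] = 1
          l.append('hello')

      if __name__ == '__main__':
          with Manager() as manager:
              d = manager.dict()
              l = manager.list()
              p = Process(target=worker, args=(d, l))
              p.start()
              p.join()
              print(d)    # {'count': 1}
              print(l)    # ['hello']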
1690
1691.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
1693
Manager processes will be shut down as soon as they are garbage collected or
1695their parent process exits.  The manager classes are defined in the
1696:mod:`multiprocessing.managers` module:
1697
1698.. class:: BaseManager(address=None, authkey=None, serializer='pickle', ctx=None, *, shutdown_timeout=1.0)
1699
1700   Create a BaseManager object.
1701
1702   Once created one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
1703   that the manager object refers to a started manager process.
1704
1705   *address* is the address on which the manager process listens for new
1706   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1707
1708   *authkey* is the authentication key which will be used to check the
1709   validity of incoming connections to the server process.  If
1710   *authkey* is ``None`` then ``current_process().authkey`` is used.
1711   Otherwise *authkey* is used and it must be a byte string.
1712
1713   *serializer* must be ``'pickle'`` (use :mod:`pickle` serialization) or
1714   ``'xmlrpclib'`` (use :mod:`xmlrpc.client` serialization).
1715
1716   *ctx* is a context object, or ``None`` (use the current context). See the
1717   :func:`get_context` function.
1718
1719   *shutdown_timeout* is a timeout in seconds used to wait until the process
1720   used by the manager completes in the :meth:`shutdown` method. If the
1721   shutdown times out, the process is terminated. If terminating the process
1722   also times out, the process is killed.
1723
1724   .. versionchanged:: 3.11
1725      Added the *shutdown_timeout* parameter.
1726
1727   .. method:: start([initializer[, initargs]])
1728
1729      Start a subprocess to start the manager.  If *initializer* is not ``None``
1730      then the subprocess will call ``initializer(*initargs)`` when it starts.
1731
1732   .. method:: get_server()
1733
1734      Returns a :class:`Server` object which represents the actual server under
1735      the control of the Manager. The :class:`Server` object supports the
1736      :meth:`serve_forever` method::
1737
1738      >>> from multiprocessing.managers import BaseManager
1739      >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
1740      >>> server = manager.get_server()
1741      >>> server.serve_forever()
1742
1743      :class:`Server` additionally has an :attr:`address` attribute.
1744
1745   .. method:: connect()
1746
1747      Connect a local manager object to a remote manager process::
1748
1749      >>> from multiprocessing.managers import BaseManager
1750      >>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
1751      >>> m.connect()
1752
1753   .. method:: shutdown()
1754
1755      Stop the process used by the manager.  This is only available if
1756      :meth:`start` has been used to start the server process.
1757
1758      This can be called multiple times.
1759
1760   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1761
1762      A classmethod which can be used for registering a type or callable with
1763      the manager class.
1764
1765      *typeid* is a "type identifier" which is used to identify a particular
1766      type of shared object.  This must be a string.
1767
1768      *callable* is a callable used for creating objects for this type
1769      identifier.  If a manager instance will be connected to the
1770      server using the :meth:`connect` method, or if the
1771      *create_method* argument is ``False`` then this can be left as
1772      ``None``.
1773
1774      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1775      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1776      class is created automatically.
1777
1778      *exposed* is used to specify a sequence of method names which proxies for
1779      this typeid should be allowed to access using
1780      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1781      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1782      where no exposed list is specified, all "public methods" of the shared
1783      object will be accessible.  (Here a "public method" means any attribute
1784      which has a :meth:`~object.__call__` method and whose name does not begin
1785      with ``'_'``.)
1786
1787      *method_to_typeid* is a mapping used to specify the return type of those
1788      exposed methods which should return a proxy.  It maps method names to
1789      typeid strings.  (If *method_to_typeid* is ``None`` then
1790      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1791      method's name is not a key of this mapping or if the mapping is ``None``
1792      then the object returned by the method will be copied by value.
1793
1794      *create_method* determines whether a method should be created with name
1795      *typeid* which can be used to tell the server process to create a new
1796      shared object and return a proxy for it.  By default it is ``True``.
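
      As a minimal sketch, an explicit *exposed* list can be used to restrict
      which methods proxies may call (the ``Counter`` class is illustrative)::

         from multiprocessing.managers import BaseManager

         class Counter:
             def __init__(self):
                 self._n = 0
             def increment(self):
                 self._n += 1
             def value(self):
                 return self._n
             def reset(self):
                 self._n = 0

         class MyManager(BaseManager):
             pass

         # Proxies may only call 'increment' and 'value'; 'reset' is hidden.
         MyManager.register('Counter', Counter, exposed=('increment', 'value'))

         if __name__ == '__main__':
             with MyManager() as manager:
                 c = manager.Counter()
                 c.increment()
                 print(c.value())      # prints 1
                 # calling c.reset() would fail because it is not exposed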
1797
1798   :class:`BaseManager` instances also have one read-only property:
1799
1800   .. attribute:: address
1801
1802      The address used by the manager.
1803
1804   .. versionchanged:: 3.3
1805      Manager objects support the context management protocol -- see
1806      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` starts the
1807      server process (if it has not already started) and then returns the
1808      manager object.  :meth:`~contextmanager.__exit__` calls :meth:`shutdown`.
1809
1810      In previous versions :meth:`~contextmanager.__enter__` did not start the
1811      manager's server process if it was not already started.
1812
1813.. class:: SyncManager
1814
1815   A subclass of :class:`BaseManager` which can be used for the synchronization
1816   of processes.  Objects of this type are returned by
1817   :func:`multiprocessing.Manager`.
1818
1819   Its methods create and return :ref:`multiprocessing-proxy_objects` for a
1820   number of commonly used data types to be synchronized across processes.
1821   This notably includes shared lists and dictionaries.
1822
1823   .. method:: Barrier(parties[, action[, timeout]])
1824
1825      Create a shared :class:`threading.Barrier` object and return a
1826      proxy for it.
1827
1828      .. versionadded:: 3.3
1829
1830   .. method:: BoundedSemaphore([value])
1831
1832      Create a shared :class:`threading.BoundedSemaphore` object and return a
1833      proxy for it.
1834
1835   .. method:: Condition([lock])
1836
1837      Create a shared :class:`threading.Condition` object and return a proxy for
1838      it.
1839
1840      If *lock* is supplied then it should be a proxy for a
1841      :class:`threading.Lock` or :class:`threading.RLock` object.
1842
1843      .. versionchanged:: 3.3
1844         The :meth:`~threading.Condition.wait_for` method was added.
1845
1846   .. method:: Event()
1847
1848      Create a shared :class:`threading.Event` object and return a proxy for it.
1849
1850   .. method:: Lock()
1851
1852      Create a shared :class:`threading.Lock` object and return a proxy for it.
1853
1854   .. method:: Namespace()
1855
1856      Create a shared :class:`Namespace` object and return a proxy for it.
1857
1858   .. method:: Queue([maxsize])
1859
1860      Create a shared :class:`queue.Queue` object and return a proxy for it.
1861
1862   .. method:: RLock()
1863
1864      Create a shared :class:`threading.RLock` object and return a proxy for it.
1865
1866   .. method:: Semaphore([value])
1867
1868      Create a shared :class:`threading.Semaphore` object and return a proxy for
1869      it.
1870
1871   .. method:: Array(typecode, sequence)
1872
1873      Create an array and return a proxy for it.
1874
1875   .. method:: Value(typecode, value)
1876
1877      Create an object with a writable ``value`` attribute and return a proxy
1878      for it.
1879
1880   .. method:: dict()
1881               dict(mapping)
1882               dict(sequence)
1883
1884      Create a shared :class:`dict` object and return a proxy for it.
1885
1886   .. method:: list()
1887               list(sequence)
1888
1889      Create a shared :class:`list` object and return a proxy for it.
1890
1891   .. versionchanged:: 3.6
1892      Shared objects are capable of being nested.  For example, a shared
1893      container object such as a shared list can contain other shared objects
1894      which will all be managed and synchronized by the :class:`SyncManager`.
1895
1896.. class:: Namespace
1897
1898   A type that can register with :class:`SyncManager`.
1899
1900   A namespace object has no public methods, but does have writable attributes.
1901   Its representation shows the values of its attributes.
1902
1903   However, when using a proxy for a namespace object, an attribute beginning
1904   with ``'_'`` will be an attribute of the proxy and not an attribute of the
1905   referent:
1906
1907   .. doctest::
1908
1909    >>> manager = multiprocessing.Manager()
1910    >>> Global = manager.Namespace()
1911    >>> Global.x = 10
1912    >>> Global.y = 'hello'
1913    >>> Global._z = 12.3    # this is an attribute of the proxy
1914    >>> print(Global)
1915    Namespace(x=10, y='hello')
1916
1917
1918Customized managers
1919>>>>>>>>>>>>>>>>>>>
1920
1921To create one's own manager, one creates a subclass of :class:`BaseManager` and
1922uses the :meth:`~BaseManager.register` classmethod to register new types or
1923callables with the manager class.  For example::
1924
1925   from multiprocessing.managers import BaseManager
1926
1927   class MathsClass:
1928       def add(self, x, y):
1929           return x + y
1930       def mul(self, x, y):
1931           return x * y
1932
1933   class MyManager(BaseManager):
1934       pass
1935
1936   MyManager.register('Maths', MathsClass)
1937
1938   if __name__ == '__main__':
1939       with MyManager() as manager:
1940           maths = manager.Maths()
1941           print(maths.add(4, 3))         # prints 7
1942           print(maths.mul(7, 8))         # prints 56
1943
1944
1945Using a remote manager
1946>>>>>>>>>>>>>>>>>>>>>>
1947
1948It is possible to run a manager server on one machine and have clients use it
1949from other machines (assuming that the firewalls involved allow it).
1950
1951Running the following commands creates a server for a single shared queue which
1952remote clients can access::
1953
1954   >>> from multiprocessing.managers import BaseManager
1955   >>> from queue import Queue
1956   >>> queue = Queue()
1957   >>> class QueueManager(BaseManager): pass
   >>> QueueManager.register('get_queue', callable=lambda: queue)
1959   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1960   >>> s = m.get_server()
1961   >>> s.serve_forever()
1962
1963One client can access the server as follows::
1964
1965   >>> from multiprocessing.managers import BaseManager
1966   >>> class QueueManager(BaseManager): pass
1967   >>> QueueManager.register('get_queue')
1968   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1969   >>> m.connect()
1970   >>> queue = m.get_queue()
1971   >>> queue.put('hello')
1972
1973Another client can also use it::
1974
1975   >>> from multiprocessing.managers import BaseManager
1976   >>> class QueueManager(BaseManager): pass
1977   >>> QueueManager.register('get_queue')
1978   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1979   >>> m.connect()
1980   >>> queue = m.get_queue()
1981   >>> queue.get()
1982   'hello'
1983
1984Local processes can also access that queue, using the code from above on the
1985client to access it remotely::
1986
1987    >>> from multiprocessing import Process, Queue
1988    >>> from multiprocessing.managers import BaseManager
1989    >>> class Worker(Process):
1990    ...     def __init__(self, q):
1991    ...         self.q = q
1992    ...         super().__init__()
1993    ...     def run(self):
1994    ...         self.q.put('local hello')
1995    ...
1996    >>> queue = Queue()
1997    >>> w = Worker(queue)
1998    >>> w.start()
1999    >>> class QueueManager(BaseManager): pass
2000    ...
2001    >>> QueueManager.register('get_queue', callable=lambda: queue)
2002    >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
2003    >>> s = m.get_server()
2004    >>> s.serve_forever()
2005
2006.. _multiprocessing-proxy_objects:
2007
2008Proxy Objects
2009~~~~~~~~~~~~~
2010
2011A proxy is an object which *refers* to a shared object which lives (presumably)
2012in a different process.  The shared object is said to be the *referent* of the
2013proxy.  Multiple proxy objects may have the same referent.
2014
2015A proxy object has methods which invoke corresponding methods of its referent
2016(although not every method of the referent will necessarily be available through
2017the proxy).  In this way, a proxy can be used just like its referent can:
2018
2019.. doctest::
2020
2021   >>> from multiprocessing import Manager
2022   >>> manager = Manager()
2023   >>> l = manager.list([i*i for i in range(10)])
2024   >>> print(l)
2025   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
2026   >>> print(repr(l))
2027   <ListProxy object, typeid 'list' at 0x...>
2028   >>> l[4]
2029   16
2030   >>> l[2:5]
2031   [4, 9, 16]
2032
2033Notice that applying :func:`str` to a proxy will return the representation of
2034the referent, whereas applying :func:`repr` will return the representation of
2035the proxy.
2036
2037An important feature of proxy objects is that they are picklable so they can be
2038passed between processes.  As such, a referent can contain
2039:ref:`multiprocessing-proxy_objects`.  This permits nesting of these managed
2040lists, dicts, and other :ref:`multiprocessing-proxy_objects`:
2041
2042.. doctest::
2043
2044   >>> a = manager.list()
2045   >>> b = manager.list()
2046   >>> a.append(b)         # referent of a now contains referent of b
2047   >>> print(a, b)
2048   [<ListProxy object, typeid 'list' at ...>] []
2049   >>> b.append('hello')
2050   >>> print(a[0], b)
2051   ['hello'] ['hello']
2052
2053Similarly, dict and list proxies may be nested inside one another::
2054
2055   >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
2056   >>> d_first_inner = l_outer[0]
2057   >>> d_first_inner['a'] = 1
2058   >>> d_first_inner['b'] = 2
2059   >>> l_outer[1]['c'] = 3
2060   >>> l_outer[1]['z'] = 26
2061   >>> print(l_outer[0])
2062   {'a': 1, 'b': 2}
2063   >>> print(l_outer[1])
2064   {'c': 3, 'z': 26}
2065
2066If standard (non-proxy) :class:`list` or :class:`dict` objects are contained
2067in a referent, modifications to those mutable values will not be propagated
2068through the manager because the proxy has no way of knowing when the values
2069contained within are modified.  However, storing a value in a container proxy
2070(which triggers a ``__setitem__`` on the proxy object) does propagate through
2071the manager and so to effectively modify such an item, one could re-assign the
2072modified value to the container proxy::
2073
2074   # create a list proxy and append a mutable object (a dictionary)
2075   lproxy = manager.list()
2076   lproxy.append({})
2077   # now mutate the dictionary
2078   d = lproxy[0]
2079   d['a'] = 1
2080   d['b'] = 2
2081   # at this point, the changes to d are not yet synced, but by
2082   # updating the dictionary, the proxy is notified of the change
2083   lproxy[0] = d
2084
2085This approach is perhaps less convenient than employing nested
2086:ref:`multiprocessing-proxy_objects` for most use cases but also
2087demonstrates a level of control over the synchronization.
2088
2089.. note::
2090
2091   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
2092   by value.  So, for instance, we have:
2093
2094   .. doctest::
2095
2096       >>> manager.list([1,2,3]) == [1,2,3]
2097       False
2098
2099   One should just use a copy of the referent instead when making comparisons.
2100
2101.. class:: BaseProxy
2102
2103   Proxy objects are instances of subclasses of :class:`BaseProxy`.
2104
2105   .. method:: _callmethod(methodname[, args[, kwds]])
2106
2107      Call and return the result of a method of the proxy's referent.
2108
2109      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
2110
2111         proxy._callmethod(methodname, args, kwds)
2112
2113      will evaluate the expression ::
2114
2115         getattr(obj, methodname)(*args, **kwds)
2116
2117      in the manager's process.
2118
2119      The returned value will be a copy of the result of the call or a proxy to
2120      a new shared object -- see documentation for the *method_to_typeid*
2121      argument of :meth:`BaseManager.register`.
2122
      If an exception is raised by the call, then it is re-raised by
2124      :meth:`_callmethod`.  If some other exception is raised in the manager's
2125      process then this is converted into a :exc:`RemoteError` exception and is
2126      raised by :meth:`_callmethod`.
2127
2128      Note in particular that an exception will be raised if *methodname* has
2129      not been *exposed*.
2130
2131      An example of the usage of :meth:`_callmethod`:
2132
2133      .. doctest::
2134
2135         >>> l = manager.list(range(10))
2136         >>> l._callmethod('__len__')
2137         10
2138         >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
2139         [2, 3, 4, 5, 6]
2140         >>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
2141         Traceback (most recent call last):
2142         ...
2143         IndexError: list index out of range
2144
2145   .. method:: _getvalue()
2146
2147      Return a copy of the referent.
2148
2149      If the referent is unpicklable then this will raise an exception.
2150
2151   .. method:: __repr__
2152
2153      Return a representation of the proxy object.
2154
2155   .. method:: __str__
2156
2157      Return the representation of the referent.
2158
2159
2160Cleanup
2161>>>>>>>
2162
2163A proxy object uses a weakref callback so that when it gets garbage collected it
2164deregisters itself from the manager which owns its referent.
2165
2166A shared object gets deleted from the manager process when there are no longer
2167any proxies referring to it.
2168
2169
2170Process Pools
2171~~~~~~~~~~~~~
2172
2173.. module:: multiprocessing.pool
2174   :synopsis: Create pools of processes.
2175
2176One can create a pool of processes which will carry out tasks submitted to it
2177with the :class:`Pool` class.
2178
2179.. class:: Pool([processes[, initializer[, initargs[, maxtasksperchild [, context]]]]])
2180
2181   A process pool object which controls a pool of worker processes to which jobs
2182   can be submitted.  It supports asynchronous results with timeouts and
2183   callbacks and has a parallel map implementation.
2184
2185   *processes* is the number of worker processes to use.  If *processes* is
2186   ``None`` then the number returned by :func:`os.cpu_count` is used.
2187
2188   If *initializer* is not ``None`` then each worker process will call
2189   ``initializer(*initargs)`` when it starts.
2190
2191   *maxtasksperchild* is the number of tasks a worker process can complete
2192   before it will exit and be replaced with a fresh worker process, to enable
2193   unused resources to be freed. The default *maxtasksperchild* is ``None``, which
2194   means worker processes will live as long as the pool.
2195
2196   *context* can be used to specify the context used for starting
2197   the worker processes.  Usually a pool is created using the
2198   function :func:`multiprocessing.Pool` or the :meth:`Pool` method
2199   of a context object.  In both cases *context* is set
2200   appropriately.
2201
2202   Note that the methods of the pool object should only be called by
2203   the process which created the pool.
2204
2205   .. warning::
      :class:`multiprocessing.pool.Pool` objects have internal resources that need to be
2207      properly managed (like any other resource) by using the pool as a context manager
2208      or by calling :meth:`close` and :meth:`terminate` manually. Failure to do this
2209      can lead to the process hanging on finalization.
2210
2211      Note that it is **not correct** to rely on the garbage collector to destroy the pool
2212      as CPython does not assure that the finalizer of the pool will be called
2213      (see :meth:`object.__del__` for more information).
2214
2215   .. versionadded:: 3.2
2216      *maxtasksperchild*
2217
2218   .. versionadded:: 3.4
2219      *context*
2220
2221   .. note::
2222
2223      Worker processes within a :class:`Pool` typically live for the complete
2224      duration of the Pool's work queue. A frequent pattern found in other
2225      systems (such as Apache, mod_wsgi, etc) to free resources held by
2226      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new
      process being spawned to replace the old one. The *maxtasksperchild*
2229      argument to the :class:`Pool` exposes this ability to the end user.
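
   A minimal sketch of this worker recycling (the ``report`` helper is
   illustrative; the exact number of distinct process ids seen is timing
   dependent)::

      import os
      from multiprocessing import Pool

      def report(_):
          return os.getpid()

      if __name__ == '__main__':
          with Pool(processes=2, maxtasksperchild=2) as pool:
              pids = pool.map(report, range(8))
              # Each worker exits after two tasks, so more than two
              # distinct process ids are normally seen here.
              print(sorted(set(pids)))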
2230
2231   .. method:: apply(func[, args[, kwds]])
2232
2233      Call *func* with arguments *args* and keyword arguments *kwds*.  It blocks
2234      until the result is ready. Given this blocks, :meth:`apply_async` is
2235      better suited for performing work in parallel. Additionally, *func*
2236      is only executed in one of the workers of the pool.
2237
2238   .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]])
2239
2240      A variant of the :meth:`apply` method which returns a
2241      :class:`~multiprocessing.pool.AsyncResult` object.
2242
2243      If *callback* is specified then it should be a callable which accepts a
2244      single argument.  When the result becomes ready *callback* is applied to
2245      it, that is unless the call failed, in which case the *error_callback*
2246      is applied instead.
2247
2248      If *error_callback* is specified then it should be a callable which
2249      accepts a single argument.  If the target function fails, then
2250      the *error_callback* is called with the exception instance.
2251
2252      Callbacks should complete immediately since otherwise the thread which
2253      handles the results will get blocked.
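
      A minimal sketch using both callbacks (the ``square``, ``on_result``
      and ``on_error`` functions are illustrative)::

         from multiprocessing import Pool

         def square(x):
             return x * x

         def on_result(value):
             print('result:', value)

         def on_error(exc):
             print('call failed:', exc)

         if __name__ == '__main__':
             with Pool(2) as pool:
                 res = pool.apply_async(square, (4,),
                                        callback=on_result,
                                        error_callback=on_error)
                 res.wait()        # 'result: 16' has been printed by now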
2254
2255   .. method:: map(func, iterable[, chunksize])
2256
2257      A parallel equivalent of the :func:`map` built-in function (it supports only
      one *iterable* argument though; for multiple iterables see :meth:`starmap`).
2259      It blocks until the result is ready.
2260
2261      This method chops the iterable into a number of chunks which it submits to
2262      the process pool as separate tasks.  The (approximate) size of these
2263      chunks can be specified by setting *chunksize* to a positive integer.
2264
      Note that this method may cause high memory usage for very long iterables.
      Consider using :meth:`imap` or :meth:`imap_unordered` with an explicit
      *chunksize* option for better efficiency.
2268
2269   .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]])
2270
2271      A variant of the :meth:`.map` method which returns a
2272      :class:`~multiprocessing.pool.AsyncResult` object.
2273
2274      If *callback* is specified then it should be a callable which accepts a
2275      single argument.  When the result becomes ready *callback* is applied to
2276      it, that is unless the call failed, in which case the *error_callback*
2277      is applied instead.
2278
2279      If *error_callback* is specified then it should be a callable which
2280      accepts a single argument.  If the target function fails, then
2281      the *error_callback* is called with the exception instance.
2282
2283      Callbacks should complete immediately since otherwise the thread which
2284      handles the results will get blocked.
2285
2286   .. method:: imap(func, iterable[, chunksize])
2287
2288      A lazier version of :meth:`.map`.
2289
2290      The *chunksize* argument is the same as the one used by the :meth:`.map`
2291      method.  For very long iterables using a large value for *chunksize* can
2292      make the job complete **much** faster than using the default value of
2293      ``1``.
2294
2295      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
2296      returned by the :meth:`imap` method has an optional *timeout* parameter:
2297      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
2298      result cannot be returned within *timeout* seconds.
2299
2300   .. method:: imap_unordered(func, iterable[, chunksize])
2301
2302      The same as :meth:`imap` except that the ordering of the results from the
2303      returned iterator should be considered arbitrary.  (Only when there is
2304      only one worker process is the order guaranteed to be "correct".)
2305
2306   .. method:: starmap(func, iterable[, chunksize])
2307
2308      Like :meth:`~multiprocessing.pool.Pool.map` except that the
2309      elements of the *iterable* are expected to be iterables that are
2310      unpacked as arguments.
2311
2312      Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2),
2313      func(3,4)]``.
2314
2315      .. versionadded:: 3.3
2316
2317   .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
2318
2319      A combination of :meth:`starmap` and :meth:`map_async` that iterates over
2320      *iterable* of iterables and calls *func* with the iterables unpacked.
2321      Returns a result object.
2322
2323      .. versionadded:: 3.3
2324
2325   .. method:: close()
2326
2327      Prevents any more tasks from being submitted to the pool.  Once all the
2328      tasks have been completed the worker processes will exit.
2329
2330   .. method:: terminate()
2331
2332      Stops the worker processes immediately without completing outstanding
2333      work.  When the pool object is garbage collected :meth:`terminate` will be
2334      called immediately.
2335
2336   .. method:: join()
2337
2338      Wait for the worker processes to exit.  One must call :meth:`close` or
2339      :meth:`terminate` before using :meth:`join`.
2340
2341   .. versionadded:: 3.3
2342      Pool objects now support the context management protocol -- see
2343      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2344      pool object, and :meth:`~contextmanager.__exit__` calls :meth:`terminate`.
2345
2346
2347.. class:: AsyncResult
2348
2349   The class of the result returned by :meth:`Pool.apply_async` and
2350   :meth:`Pool.map_async`.
2351
2352   .. method:: get([timeout])
2353
2354      Return the result when it arrives.  If *timeout* is not ``None`` and the
2355      result does not arrive within *timeout* seconds then
2356      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
2357      an exception then that exception will be reraised by :meth:`get`.
2358
2359   .. method:: wait([timeout])
2360
2361      Wait until the result is available or until *timeout* seconds pass.
2362
2363   .. method:: ready()
2364
2365      Return whether the call has completed.
2366
2367   .. method:: successful()
2368
2369      Return whether the call completed without raising an exception.  Will
2370      raise :exc:`ValueError` if the result is not ready.
2371
2372      .. versionchanged:: 3.7
2373         If the result is not ready, :exc:`ValueError` is raised instead of
2374         :exc:`AssertionError`.
2375
2376The following example demonstrates the use of a pool::
2377
2378   from multiprocessing import Pool
2379   import time
2380
2381   def f(x):
2382       return x*x
2383
2384   if __name__ == '__main__':
2385       with Pool(processes=4) as pool:         # start 4 worker processes
2386           result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
2387           print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow
2388
2389           print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"
2390
2391           it = pool.imap(f, range(10))
2392           print(next(it))                     # prints "0"
2393           print(next(it))                     # prints "1"
2394           print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow
2395
2396           result = pool.apply_async(time.sleep, (10,))
2397           print(result.get(timeout=1))        # raises multiprocessing.TimeoutError
2398
2399
2400.. _multiprocessing-listeners-clients:
2401
2402Listeners and Clients
2403~~~~~~~~~~~~~~~~~~~~~
2404
2405.. module:: multiprocessing.connection
2406   :synopsis: API for dealing with sockets.
2407
2408Usually message passing between processes is done using queues or by using
2409:class:`~Connection` objects returned by
2410:func:`~multiprocessing.Pipe`.
2411
2412However, the :mod:`multiprocessing.connection` module allows some extra
2413flexibility.  It basically gives a high level message oriented API for dealing
2414with sockets or Windows named pipes.  It also has support for *digest
2415authentication* using the :mod:`hmac` module, and for polling
2416multiple connections at the same time.
2417
2418
2419.. function:: deliver_challenge(connection, authkey)
2420
2421   Send a randomly generated message to the other end of the connection and wait
2422   for a reply.
2423
2424   If the reply matches the digest of the message using *authkey* as the key
2425   then a welcome message is sent to the other end of the connection.  Otherwise
2426   :exc:`~multiprocessing.AuthenticationError` is raised.
2427
2428.. function:: answer_challenge(connection, authkey)
2429
2430   Receive a message, calculate the digest of the message using *authkey* as the
2431   key, and then send the digest back.
2432
2433   If a welcome message is not received, then
2434   :exc:`~multiprocessing.AuthenticationError` is raised.
2435
2436.. function:: Client(address[, family[, authkey]])
2437
2438   Attempt to set up a connection to the listener which is using address
2439   *address*, returning a :class:`~Connection`.
2440
   The type of the connection is determined by the *family* argument, but this can
2442   generally be omitted since it can usually be inferred from the format of
2443   *address*. (See :ref:`multiprocessing-address-formats`)
2444
   If *authkey* is given and not ``None``, it should be a byte string and will be
   used as the secret key for an HMAC-based authentication challenge. No
   authentication is done if *authkey* is ``None``.
2448   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2449   See :ref:`multiprocessing-auth-keys`.
2450
2451.. class:: Listener([address[, family[, backlog[, authkey]]]])
2452
2453   A wrapper for a bound socket or Windows named pipe which is 'listening' for
2454   connections.
2455
2456   *address* is the address to be used by the bound socket or named pipe of the
2457   listener object.
2458
2459   .. note::
2460
2461      If an address of '0.0.0.0' is used, the address will not be a connectable
2462      end point on Windows. If you require a connectable end-point,
2463      you should use '127.0.0.1'.
2464
2465   *family* is the type of socket (or named pipe) to use.  This can be one of
2466   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
2467   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
2468   the first is guaranteed to be available.  If *family* is ``None`` then the
2469   family is inferred from the format of *address*.  If *address* is also
2470   ``None`` then a default is chosen.  This default is the family which is
2471   assumed to be the fastest available.  See
2472   :ref:`multiprocessing-address-formats`.  Note that if *family* is
2473   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
2474   private temporary directory created using :func:`tempfile.mkstemp`.
2475
2476   If the listener object uses a socket then *backlog* (1 by default) is passed
2477   to the :meth:`~socket.socket.listen` method of the socket once it has been
2478   bound.
2479
   If *authkey* is given and not ``None``, it should be a byte string and will be
   used as the secret key for an HMAC-based authentication challenge. No
   authentication is done if *authkey* is ``None``.
2483   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2484   See :ref:`multiprocessing-auth-keys`.
2485
2486   .. method:: accept()
2487
2488      Accept a connection on the bound socket or named pipe of the listener
2489      object and return a :class:`~Connection` object.
2490      If authentication is attempted and fails, then
2491      :exc:`~multiprocessing.AuthenticationError` is raised.
2492
2493   .. method:: close()
2494
2495      Close the bound socket or named pipe of the listener object.  This is
2496      called automatically when the listener is garbage collected.  However it
2497      is advisable to call it explicitly.
2498
2499   Listener objects have the following read-only properties:
2500
2501   .. attribute:: address
2502
2503      The address which is being used by the Listener object.
2504
2505   .. attribute:: last_accepted
2506
2507      The address from which the last accepted connection came.  If this is
2508      unavailable then it is ``None``.
2509
2510   .. versionadded:: 3.3
2511      Listener objects now support the context management protocol -- see
2512      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2513      listener object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
2514
2515.. function:: wait(object_list, timeout=None)
2516
2517   Wait till an object in *object_list* is ready.  Returns the list of
2518   those objects in *object_list* which are ready.  If *timeout* is a
2519   float then the call blocks for at most that many seconds.  If
2520   *timeout* is ``None`` then it will block for an unlimited period.
2521   A negative timeout is equivalent to a zero timeout.
2522
2523   For both Unix and Windows, an object can appear in *object_list* if
2524   it is
2525
2526   * a readable :class:`~multiprocessing.connection.Connection` object;
2527   * a connected and readable :class:`socket.socket` object; or
2528   * the :attr:`~multiprocessing.Process.sentinel` attribute of a
2529     :class:`~multiprocessing.Process` object.
2530
2531   A connection or socket object is ready when there is data available
2532   to be read from it, or the other end has been closed.
2533
   **Unix**: ``wait(object_list, timeout)`` is almost equivalent to
2535   ``select.select(object_list, [], [], timeout)``.  The difference is
2536   that, if :func:`select.select` is interrupted by a signal, it can
2537   raise :exc:`OSError` with an error number of ``EINTR``, whereas
2538   :func:`wait` will not.
2539
2540   **Windows**: An item in *object_list* must either be an integer
2541   handle which is waitable (according to the definition used by the
2542   documentation of the Win32 function ``WaitForMultipleObjects()``)
2543   or it can be an object with a :meth:`fileno` method which returns a
2544   socket handle or pipe handle.  (Note that pipe handles and socket
2545   handles are **not** waitable handles.)
2546
2547   .. versionadded:: 3.3
2548
2549
2550**Examples**
2551
2552The following server code creates a listener which uses ``'secret password'`` as
2553an authentication key.  It then waits for a connection and sends some data to
2554the client::
2555
2556   from multiprocessing.connection import Listener
2557   from array import array
2558
2559   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
2560
2561   with Listener(address, authkey=b'secret password') as listener:
2562       with listener.accept() as conn:
2563           print('connection accepted from', listener.last_accepted)
2564
2565           conn.send([2.25, None, 'junk', float])
2566
2567           conn.send_bytes(b'hello')
2568
2569           conn.send_bytes(array('i', [42, 1729]))
2570
2571The following code connects to the server and receives some data from the
2572server::
2573
2574   from multiprocessing.connection import Client
2575   from array import array
2576
2577   address = ('localhost', 6000)
2578
2579   with Client(address, authkey=b'secret password') as conn:
2580       print(conn.recv())                  # => [2.25, None, 'junk', float]
2581
2582       print(conn.recv_bytes())            # => 'hello'
2583
2584       arr = array('i', [0, 0, 0, 0, 0])
2585       print(conn.recv_bytes_into(arr))    # => 8
2586       print(arr)                          # => array('i', [42, 1729, 0, 0, 0])
2587
2588The following code uses :func:`~multiprocessing.connection.wait` to
2589wait for messages from multiple processes at once::
2590
2591   import time, random
2592   from multiprocessing import Process, Pipe, current_process
2593   from multiprocessing.connection import wait
2594
2595   def foo(w):
2596       for i in range(10):
2597           w.send((i, current_process().name))
2598       w.close()
2599
2600   if __name__ == '__main__':
2601       readers = []
2602
2603       for i in range(4):
2604           r, w = Pipe(duplex=False)
2605           readers.append(r)
2606           p = Process(target=foo, args=(w,))
2607           p.start()
2608           # We close the writable end of the pipe now to be sure that
2609           # p is the only process which owns a handle for it.  This
2610           # ensures that when p closes its handle for the writable end,
2611           # wait() will promptly report the readable end as being ready.
2612           w.close()
2613
2614       while readers:
2615           for r in wait(readers):
2616               try:
2617                   msg = r.recv()
2618               except EOFError:
2619                   readers.remove(r)
2620               else:
2621                   print(msg)
2622
2623
2624.. _multiprocessing-address-formats:
2625
2626Address Formats
2627>>>>>>>>>>>>>>>
2628
2629* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2630  *hostname* is a string and *port* is an integer.
2631
2632* An ``'AF_UNIX'`` address is a string representing a filename on the
2633  filesystem.
2634
2635* An ``'AF_PIPE'`` address is a string of the form
2636  :samp:`r'\\\\\\.\\pipe\\\\{PipeName}'`.  To use :func:`Client` to connect to a named
2637  pipe on a remote computer called *ServerName* one should use an address of the
2638  form :samp:`r'\\\\\\\\{ServerName}\\pipe\\\\{PipeName}'` instead.
2639
2640Note that any string beginning with two backslashes is assumed by default to be
2641an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
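
A minimal sketch of the three address forms (the Unix socket path and the
pipe name are illustrative; only the ``'AF_INET'`` listener is actually
created here so the snippet runs on any platform)::

   from multiprocessing.connection import Listener

   # 'AF_INET': a (hostname, port) tuple
   with Listener(('localhost', 6000)) as listener:
       print(listener.address)

   # 'AF_UNIX' (Unix only): a filesystem path, e.g.
   #     Listener('/tmp/example.sock', family='AF_UNIX')

   # 'AF_PIPE' (Windows only): a named pipe, e.g.
   #     Listener(r'\\.\pipe\example', family='AF_PIPE')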
2642
2643
2644.. _multiprocessing-auth-keys:
2645
2646Authentication keys
2647~~~~~~~~~~~~~~~~~~~
2648
2649When one uses :meth:`Connection.recv <Connection.recv>`, the
2650data received is automatically
2651unpickled. Unfortunately unpickling data from an untrusted source is a security
2652risk. Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
2653to provide digest authentication.
2654
2655An authentication key is a byte string which can be thought of as a
2656password: once a connection is established both ends will demand proof
2657that the other knows the authentication key.  (Demonstrating that both
2658ends are using the same key does **not** involve sending the key over
2659the connection.)
2660
2661If authentication is requested but no authentication key is specified then the
2662return value of ``current_process().authkey`` is used (see
2663:class:`~multiprocessing.Process`).  This value will be automatically inherited by
2664any :class:`~multiprocessing.Process` object that the current process creates.
2665This means that (by default) all processes of a multi-process program will share
2666a single authentication key which can be used when setting up connections
2667between themselves.
2668
2669Suitable authentication keys can also be generated by using :func:`os.urandom`.
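
A minimal sketch of generating a key and installing it as the default for the
current process (the port number is illustrative)::

   import os
   from multiprocessing import current_process
   from multiprocessing.connection import Listener

   # A fresh 32-byte key; both ends of a connection must use the same value.
   authkey = os.urandom(32)

   # Make it the default key for this process and for any child
   # processes created afterwards.
   current_process().authkey = authkey

   with Listener(('localhost', 6000), authkey=authkey) as listener:
       print('listening on', listener.address)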
2670
2671
2672Logging
2673~~~~~~~
2674
2675Some support for logging is available.  Note, however, that the :mod:`logging`
package does not use process-shared locks so it is possible (depending on the
2677handler type) for messages from different processes to get mixed up.
2678
2679.. currentmodule:: multiprocessing
2680.. function:: get_logger()
2681
2682   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2683   will be created.
2684
2685   When first created the logger has level :data:`logging.NOTSET` and no
2686   default handler. Messages sent to this logger will not by default propagate
2687   to the root logger.
2688
2689   Note that on Windows child processes will only inherit the level of the
2690   parent process's logger -- any other customization of the logger will not be
2691   inherited.
2692
2693.. currentmodule:: multiprocessing
2694.. function:: log_to_stderr(level=None)
2695
2696   This function performs a call to :func:`get_logger` but in addition to
2697   returning the logger created by get_logger, it adds a handler which sends
2698   output to :data:`sys.stderr` using format
2699   ``'[%(levelname)s/%(processName)s] %(message)s'``.
   You can modify the level of the logger by passing a ``level`` argument.
2701
2702Below is an example session with logging turned on::
2703
2704    >>> import multiprocessing, logging
2705    >>> logger = multiprocessing.log_to_stderr()
2706    >>> logger.setLevel(logging.INFO)
2707    >>> logger.warning('doomed')
2708    [WARNING/MainProcess] doomed
2709    >>> m = multiprocessing.Manager()
2710    [INFO/SyncManager-...] child process calling self.run()
2711    [INFO/SyncManager-...] created temp directory /.../pymp-...
2712    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2713    >>> del m
2714    [INFO/MainProcess] sending shutdown message to manager
2715    [INFO/SyncManager-...] manager exiting with exitcode 0
2716
2717For a full table of logging levels, see the :mod:`logging` module.
2718
2719
2720The :mod:`multiprocessing.dummy` module
2721~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2722
2723.. module:: multiprocessing.dummy
2724   :synopsis: Dumb wrapper around threading.
2725
2726:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2727no more than a wrapper around the :mod:`threading` module.
2728
2729.. currentmodule:: multiprocessing.pool
2730
2731In particular, the ``Pool`` function provided by :mod:`multiprocessing.dummy`
2732returns an instance of :class:`ThreadPool`, which is a subclass of
2733:class:`Pool` that supports all the same method calls but uses a pool of
2734worker threads rather than worker processes.
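
A minimal sketch of swapping in the thread-backed pool (the ``word_length``
helper is illustrative)::

   from multiprocessing.dummy import Pool    # a pool of threads, not processes

   def word_length(word):
       return len(word)

   if __name__ == '__main__':
       with Pool(4) as pool:
           print(pool.map(word_length, ['spam', 'eggs', 'bacon']))
           # prints [4, 4, 5]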
2735
2736
2737.. class:: ThreadPool([processes[, initializer[, initargs]]])
2738
2739   A thread pool object which controls a pool of worker threads to which jobs
2740   can be submitted.  :class:`ThreadPool` instances are fully interface
2741   compatible with :class:`Pool` instances, and their resources must also be
2742   properly managed, either by using the pool as a context manager or by
2743   calling :meth:`~multiprocessing.pool.Pool.close` and
2744   :meth:`~multiprocessing.pool.Pool.terminate` manually.
2745
2746   *processes* is the number of worker threads to use.  If *processes* is
2747   ``None`` then the number returned by :func:`os.cpu_count` is used.
2748
2749   If *initializer* is not ``None`` then each worker process will call
2750   ``initializer(*initargs)`` when it starts.
2751
2752   Unlike :class:`Pool`, *maxtasksperchild* and *context* cannot be provided.
2753
2754    .. note::
2755
2756        A :class:`ThreadPool` shares the same interface as :class:`Pool`, which
2757        is designed around a pool of processes and predates the introduction of
2758        the :class:`concurrent.futures` module.  As such, it inherits some
2759        operations that don't make sense for a pool backed by threads, and it
2760        has its own type for representing the status of asynchronous jobs,
2761        :class:`AsyncResult`, that is not understood by any other libraries.
2762
2763        Users should generally prefer to use
2764        :class:`concurrent.futures.ThreadPoolExecutor`, which has a simpler
2765        interface that was designed around threads from the start, and which
2766        returns :class:`concurrent.futures.Future` instances that are
2767        compatible with many other libraries, including :mod:`asyncio`.
2768
2769
2770.. _multiprocessing-programming:
2771
2772Programming guidelines
2773----------------------
2774
2775There are certain guidelines and idioms which should be adhered to when using
2776:mod:`multiprocessing`.
2777
2778
2779All start methods
2780~~~~~~~~~~~~~~~~~
2781
2782The following applies to all start methods.
2783
2784Avoid shared state
2785
2786    As far as possible one should try to avoid shifting large amounts of data
2787    between processes.
2788
2789    It is probably best to stick to using queues or pipes for communication
2790    between processes rather than using the lower level synchronization
2791    primitives.
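
    As an illustration, a minimal sketch that sends a result back to the
    parent through a :class:`~multiprocessing.Queue` rather than through
    shared state (the names are illustrative)::

        from multiprocessing import Process, Queue

        def worker(q):
            q.put('result computed in the child')

        if __name__ == '__main__':
            q = Queue()
            p = Process(target=worker, args=(q,))
            p.start()
            print(q.get())          # receive the data before joining
            p.join()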
2792
2793Picklability
2794
2795    Ensure that the arguments to the methods of proxies are picklable.
2796
2797Thread safety of proxies
2798
2799    Do not use a proxy object from more than one thread unless you protect it
2800    with a lock.
2801
2802    (There is never a problem with different processes using the *same* proxy.)
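
    A minimal sketch of the kind of guard intended here (the names are
    illustrative)::

        import threading
        from multiprocessing import Manager

        def append_item(proxy, lock, item):
            # serialize access to the proxy across threads
            with lock:
                proxy.append(item)

        if __name__ == '__main__':
            manager = Manager()
            proxy = manager.list()
            proxy_lock = threading.Lock()
            threads = [threading.Thread(target=append_item,
                                        args=(proxy, proxy_lock, i))
                       for i in range(4)]
            for t in threads:
                t.start()
            for t in threads:
                t.join()
            print(list(proxy))      # all four items, appended safely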
2803
2804Joining zombie processes
2805
2806    On Unix when a process finishes but has not been joined it becomes a zombie.
2807    There should never be very many because each time a new process starts (or
2808    :func:`~multiprocessing.active_children` is called) all completed processes
2809    which have not yet been joined will be joined.  Also calling a finished
2810    process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2811    join the process.  Even so it is probably good
2812    practice to explicitly join all the processes that you start.
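
    For example, keeping references to the processes you start makes it easy
    to join them all explicitly (a minimal sketch)::

        from multiprocessing import Process

        def work(i):
            print('worker', i, 'done')

        if __name__ == '__main__':
            procs = [Process(target=work, args=(i,)) for i in range(3)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()            # reap every child explicitly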
2813
2814Better to inherit than pickle/unpickle
2815
2816    When using the *spawn* or *forkserver* start methods many types
2817    from :mod:`multiprocessing` need to be picklable so that child
2818    processes can use them.  However, one should generally avoid
2819    sending shared objects to other processes using pipes or queues.
2820    Instead you should arrange the program so that a process which
2821    needs access to a shared resource created elsewhere can inherit it
2822    from an ancestor process.
2823
2824Avoid terminating processes
2825
2826    Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
2827    method to stop a process is liable to
2828    cause any shared resources (such as locks, semaphores, pipes and queues)
2829    currently being used by the process to become broken or unavailable to other
2830    processes.
2831
2832    Therefore it is probably best to only consider using
2833    :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
2834    which never use any shared resources.
2835
2836Joining processes that use queues
2837
2838    Bear in mind that a process that has put items in a queue will wait before
2839    terminating until all the buffered items are fed by the "feeder" thread to
2840    the underlying pipe.  (The child process can call the
2841    :meth:`Queue.cancel_join_thread <multiprocessing.Queue.cancel_join_thread>`
2842    method of the queue to avoid this behaviour.)
2843
2844    This means that whenever you use a queue you need to make sure that all
2845    items which have been put on the queue will eventually be removed before the
2846    process is joined.  Otherwise you cannot be sure that processes which have
2847    put items on the queue will terminate.  Remember also that non-daemonic
2848    processes will be joined automatically.
2849
2850    An example which will deadlock is the following::
2851
2852        from multiprocessing import Process, Queue
2853
2854        def f(q):
2855            q.put('X' * 1000000)
2856
2857        if __name__ == '__main__':
2858            queue = Queue()
2859            p = Process(target=f, args=(queue,))
2860            p.start()
2861            p.join()                    # this deadlocks
2862            obj = queue.get()
2863
2864    A fix here would be to swap the last two lines (or simply remove the
2865    ``p.join()`` line).
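
    For instance, the reordered version below drains the queue before joining
    and therefore does not deadlock::

        from multiprocessing import Process, Queue

        def f(q):
            q.put('X' * 1000000)

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            obj = queue.get()           # drain the queue first
            p.join()                    # now joining is safe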
2866
2867Explicitly pass resources to child processes
2868
    On Unix using the *fork* start method, a child process can make
    use of a shared resource created in a parent process simply by
    referring to it through a global variable.  However, it is better
    to pass the object as an argument to the constructor for the
    child process.
2873
2874    Apart from making the code (potentially) compatible with Windows
2875    and the other start methods this also ensures that as long as the
2876    child process is still alive the object will not be garbage
2877    collected in the parent process.  This might be important if some
2878    resource is freed when the object is garbage collected in the
2879    parent process.
2880
2881    So for instance ::
2882
        from multiprocessing import Process, Lock

        def f():
            with lock:               # relies on the inherited global "lock"
                print('hello world')

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f).start()
2892
2893    should be rewritten as ::
2894
        from multiprocessing import Process, Lock

        def f(l):
            with l:                  # the lock was passed in explicitly
                print('hello world')

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f, args=(lock,)).start()
2904
Beware of replacing :data:`sys.stdin` with a "file-like object"
2906
2907    :mod:`multiprocessing` originally unconditionally called::
2908
2909        os.close(sys.stdin.fileno())
2910
2911    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
2912    in issues with processes-in-processes. This has been changed to::
2913
2914        sys.stdin.close()
2915        sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)
2916
    This solves the fundamental issue of processes colliding with each other
    resulting in a bad file descriptor error, but introduces a potential danger
    to applications which replace :data:`sys.stdin` with a "file-like object"
    with output buffering.  This danger is that if multiple processes call
    :meth:`~io.IOBase.close` on this file-like object, it could result in the
    same data being flushed to the object multiple times, resulting in
    corruption.
2923
2924    If you write a file-like object and implement your own caching, you can
2925    make it fork-safe by storing the pid whenever you append to the cache,
2926    and discarding the cache when the pid changes. For example::
2927
2928       @property
2929       def cache(self):
2930           pid = os.getpid()
2931           if pid != self._pid:
2932               self._pid = pid
2933               self._cache = []
2934           return self._cache
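
    Put together, a minimal fork-safe wrapper might look like the following
    sketch (the class name ``ForkSafeWriter`` is hypothetical and not part of
    :mod:`multiprocessing`)::

        import os
        import sys

        class ForkSafeWriter:
            """Buffer writes per-process so a forked child cannot re-flush
            data that was buffered by its parent."""

            def __init__(self, stream=sys.__stdout__):
                self._stream = stream
                self._pid = os.getpid()
                self._cache = []

            @property
            def cache(self):
                pid = os.getpid()
                if pid != self._pid:        # running in a forked child
                    self._pid = pid
                    self._cache = []        # discard the parent's buffered data
                return self._cache

            def write(self, data):
                self.cache.append(data)

            def close(self):
                if self.cache:
                    self._stream.write(''.join(self.cache))
                    self._cache = []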
2935
    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.
2937
2938The *spawn* and *forkserver* start methods
2939~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2940
There are a few extra restrictions which don't apply to the *fork*
start method.
2943
2944More picklability
2945
    Ensure that all arguments to :meth:`Process.__init__
    <multiprocessing.Process.__init__>` are picklable.
2947    Also, if you subclass :class:`~multiprocessing.Process` then make sure that
2948    instances will be picklable when the :meth:`Process.start
2949    <multiprocessing.Process.start>` method is called.
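
    For example, a subclass whose state consists only of simple, picklable
    attributes can be started safely under *spawn* (a minimal sketch; the
    class and attribute names are illustrative)::

        from multiprocessing import Process

        class Worker(Process):
            def __init__(self, job_name, count):
                super().__init__()
                self.job_name = job_name    # plain, picklable attributes only
                self.count = count

            def run(self):
                print(self.job_name, 'processing', self.count, 'items')

        if __name__ == '__main__':
            w = Worker('resize-images', 10)
            w.start()
            w.join()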
2950
2951Global variables
2952
2953    Bear in mind that if code run in a child process tries to access a global
2954    variable, then the value it sees (if any) may not be the same as the value
2955    in the parent process at the time that :meth:`Process.start
2956    <multiprocessing.Process.start>` was called.
2957
2958    However, global variables which are just module level constants cause no
2959    problems.
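
    A minimal sketch of the pitfall: under *spawn* the child re-imports the
    module, so it sees the value bound at import time rather than the value
    set later in the parent::

        from multiprocessing import Process, set_start_method

        counter = 0                 # module-level global

        def show():
            # prints 0 under spawn, even though the parent set it to 1
            print('child sees counter =', counter)

        if __name__ == '__main__':
            set_start_method('spawn')
            counter = 1             # rebound only in the parent process
            p = Process(target=show)
            p.start()
            p.join()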
2960
2961.. _multiprocessing-safe-main-import:
2962
2963Safe importing of main module
2964
    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a
    new process).
2968
    For example, under the *spawn* or *forkserver* start method, running the
    following module would fail with a :exc:`RuntimeError`::
2972
2973        from multiprocessing import Process
2974
2975        def foo():
2976            print('hello')
2977
2978        p = Process(target=foo)
2979        p.start()
2980
2981    Instead one should protect the "entry point" of the program by using ``if
2982    __name__ == '__main__':`` as follows::
2983
2984       from multiprocessing import Process, freeze_support, set_start_method
2985
2986       def foo():
2987           print('hello')
2988
2989       if __name__ == '__main__':
2990           freeze_support()
2991           set_start_method('spawn')
2992           p = Process(target=foo)
2993           p.start()
2994
2995    (The ``freeze_support()`` line can be omitted if the program will be run
2996    normally instead of frozen.)
2997
2998    This allows the newly spawned Python interpreter to safely import the module
2999    and then run the module's ``foo()`` function.
3000
3001    Similar restrictions apply if a pool or manager is created in the main
3002    module.
3003
3004
3005.. _multiprocessing-examples:
3006
3007Examples
3008--------
3009
3010Demonstration of how to create and use customized managers and proxies:
3011
3012.. literalinclude:: ../includes/mp_newtype.py
3013   :language: python3
3014
3015
3016Using :class:`~multiprocessing.pool.Pool`:
3017
3018.. literalinclude:: ../includes/mp_pool.py
3019   :language: python3
3020
3021
3022An example showing how to use queues to feed tasks to a collection of worker
3023processes and collect the results:
3024
.. literalinclude:: ../includes/mp_workers.py
   :language: python3
3026