Path: ~//proc/thread-self/root/opt/alt/python35/lib/python3.5/site-packages/joblib/__pycache__/_parallel_backends.cpython-35.pyc
File Content:
(The .pyc content is binary; below is the source recoverable from its marshalled docstrings and names. The dump is truncated inside ParallelBackendBase.batch_completed, and the relative-import targets are inferred from joblib's source layout.)

"""Backends for embarrassingly parallel code."""

# several plain top-level imports follow here; their names are not
# recoverable from the dump

from abc import ABCMeta, abstractmethod

from .format_stack import format_exc
from .my_exceptions import WorkerInterrupt, TransportableException
from ._multiprocessing_helpers import mp
from ._compat import with_metaclass

if mp is not None:
    from .pool import MemmapingPool
    from multiprocessing.pool import ThreadPool


class ParallelBackendBase(with_metaclass(ABCMeta)):
    """Helper abc which defines all methods a ParallelBackend must implement"""

    @abstractmethod
    def effective_n_jobs(self, n_jobs):
        """Determine the number of jobs that can actually run in parallel

        n_jobs is the number of workers requested by the callers. Passing
        n_jobs=-1 means requesting all available workers, for instance
        matching the number of CPU cores on the worker host(s).

        This method should return a guesstimate of the number of workers
        that can actually perform work concurrently. The primary use case
        is to make it possible for the caller to know in how many chunks
        to slice the work. In general working on larger data chunks is
        more efficient (less scheduling overhead and better use of CPU
        cache prefetching heuristics) as long as all the workers have
        enough work to do.
        """

    @abstractmethod
    def apply_async(self, func, callback=None):
        """Schedule a func to be run"""

    def configure(self, n_jobs=1, parallel=None, **backend_args):
        """Reconfigure the backend and return the number of workers.

        This makes it possible to reuse an existing backend instance for
        successive independent calls to Parallel with different parameters.
        """
        self.parallel = parallel
        return self.effective_n_jobs(n_jobs)

    def terminate(self):
        """Shutdown the process or thread pool"""

    def compute_batch_size(self):
        """Determine the optimal batch size"""
        return 1

    def batch_completed(self, batch_size, duration):
        """Callback indicate how long it took to run a batch"""
        # (dump truncated here; eight further classes built on this base
        # are present in the bytecode but their bodies are not shown)
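The readable fragments above survive because a .pyc file is just a small header followed by a marshalled code object, whose docstrings and names are stored as plain strings. A short stdlib-only sketch of that round trip (the `demo.py` module and its contents are made up for illustration; the header size differs between Python 3.5's 12 bytes and 3.7+'s 16 bytes):

```python
import marshal
import os
import py_compile
import sys
import tempfile

# Byte-compile a tiny stand-in module, mimicking how CPython produced
# _parallel_backends.cpython-35.pyc, then read the cached file back.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "demo.py")
with open(src, "w") as f:
    f.write('"""Backends demo."""\n'
            'def effective_n_jobs(n_jobs):\n'
            '    return n_jobs\n')
pyc = py_compile.compile(src)

header_size = 16 if sys.version_info >= (3, 7) else 12  # 3.5 used 12 bytes
with open(pyc, "rb") as f:
    header = f.read(header_size)    # magic number + metadata fields
    code = marshal.loads(f.read())  # the module's code object

print(code.co_consts[0])  # the module docstring survives in the bytecode
print([c.co_name for c in code.co_consts if hasattr(c, "co_name")])
```

Note that `marshal` data is not stable across interpreter versions, which is why a cpython-35 .pyc can only be unmarshalled cleanly by a Python 3.5 interpreter.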
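The recovered base class defines the contract each backend must honor: `configure()` sizes the pool and stores the owning `Parallel` instance, `effective_n_jobs()` resolves requests like `n_jobs=-1`, `apply_async()` schedules a function, and `terminate()` tears the pool down. A minimal stdlib-only sketch of that contract (the `MiniThreadingBackend` name and body are illustrative, not joblib's actual `ThreadingBackend`):

```python
import os
from multiprocessing.pool import ThreadPool


class MiniThreadingBackend:
    """Illustrative backend following the ParallelBackendBase contract."""

    def effective_n_jobs(self, n_jobs):
        # n_jobs=-1 requests all available CPUs, -2 all but one, and so on
        if n_jobs < 0:
            n_jobs = max(os.cpu_count() + 1 + n_jobs, 1)
        return n_jobs

    def configure(self, n_jobs=1, parallel=None, **backend_args):
        """Set up the pool and return the number of workers."""
        self.parallel = parallel
        n_jobs = self.effective_n_jobs(n_jobs)
        self._pool = ThreadPool(n_jobs)
        return n_jobs

    def apply_async(self, func, callback=None):
        """Schedule func; returns a multiprocessing.pool.AsyncResult."""
        return self._pool.apply_async(func, callback=callback)

    def terminate(self):
        """Shutdown the thread pool."""
        self._pool.terminate()


backend = MiniThreadingBackend()
backend.configure(n_jobs=2)
result = backend.apply_async(lambda: sum(range(10))).get()
backend.terminate()
print(result)
```

A thread pool like this suits releasing-the-GIL workloads; the `MemmapingPool` import in the recovered source is the process-based counterpart for CPU-bound work.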