The asynciojobs API¶
The Scheduler classes¶
The PureScheduler class is a set of AbstractJobs that, together with their required relationships, form an execution graph.
- class asynciojobs.purescheduler.PureScheduler(*jobs_or_sequences, jobs_window=None, timeout=None, shutdown_timeout=1, watch=None, verbose=False)[source]¶
A PureScheduler instance is made of a set of AbstractJob objects.
The purpose of the scheduler object is to orchestrate an execution of these jobs that respects the required relationships, until they are all complete. It starts with the ones that have no requirement, and then triggers the other ones as their requirement jobs complete.
For this reason, the dependency/requirements graph must be acyclic.
Optionally, a scheduler orchestration can be confined to a finite number of concurrent jobs (see the jobs_window parameter below).
It is also possible to define a timeout attribute on the object, which limits the execution time of a scheduler.
Running an AbstractJob means executing its co_run() method, which must be a coroutine.
The result of a job’s co_run() is NOT taken into account, as long as it returns without raising an exception. If it does raise an exception, overall execution is aborted if and only if the job is critical. In all cases, the result and/or exception of each individual job can be inspected and retrieved individually at any time, including of course once the orchestration is complete.
- Parameters:
jobs_or_sequences – instances of AbstractJob or Sequence. The order in which they are mentioned is irrelevant.
jobs_window – is an integer that specifies how many jobs can be run simultaneously. None or 0 means no limit.
timeout – can be an int or float, expressed in seconds; it applies to the overall orchestration of that scheduler, not to any individual job. Can also be None, which means no timeout.
shutdown_timeout – same meaning as timeout, but for the shutdown phase.
watch – if the caller passes a Watch instance, it is used in debugging messages to show the time elapsed with respect to that watch, instead of the wall clock.
verbose (bool) – flag that says whether execution should be verbose.
Examples
Creating an empty scheduler:
s = Scheduler()
A scheduler with a single job:
s = Scheduler(Job(asyncio.sleep(1)))
A scheduler with 2 jobs in parallel:
s = Scheduler(Job(asyncio.sleep(1)), Job(asyncio.sleep(2)))
A scheduler with 2 jobs in sequence:
s = Scheduler(Sequence(Job(asyncio.sleep(1)), Job(asyncio.sleep(2))))
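The jobs_window cap described above can be pictured with plain asyncio; the sketch below is illustrative only and does not use asynciojobs internals - run_with_window and its semaphore are assumptions, not library code:

```python
import asyncio

async def run_with_window(coros, jobs_window):
    # Hypothetical helper, not part of asynciojobs: cap concurrency
    # with a semaphore, much like a scheduler's jobs_window would.
    # None or 0 means no limit.
    sem = asyncio.Semaphore(jobs_window or len(coros))

    async def guarded(coro):
        async with sem:
            return await coro

    # gather() preserves submission order in its result list
    return await asyncio.gather(*(guarded(c) for c in coros))

async def main():
    async def job(i):
        await asyncio.sleep(0.01)
        return i
    return await run_with_window([job(i) for i in range(5)], jobs_window=2)

results = asyncio.run(main())
```

Note that even though at most two jobs run at any one time, the results come back in submission order.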
In this document, the Schedulable name refers to a type hint that encompasses instances of either the AbstractJob or Sequence classes.
- add(job)[source]¶
Adds a single Schedulable object; this method name is inspired by plain Python set.add().
- Parameters:
job – a single Schedulable object.
- Returns:
the job object, for convenience, as it can be needed to build requirements later on in the script.
- Return type:
job
- bypass_and_remove(job)[source]¶
job is assumed to be part of the scheduler (a ValueError is raised otherwise).
This method removes job from the scheduler while preserving the logic; to achieve this, all the requirement links are redirected so as to bypass that job.
More formally, the algorithm creates a requirement link from each downstream job that requires job to each upstream job that job itself requires.
For now, nesting is not supported: the job needs to be an actual member of this scheduler, and not of a nested scheduler within; jobs in a Sequence work fine though.
- Return type:
None
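The link-redirection logic above can be sketched on a plain mapping from each job to the set of jobs it requires; this is a minimal, hypothetical stand-in, not the scheduler method itself:

```python
def bypass_and_remove(requires, job):
    # requires: maps each job to the set of jobs it requires.
    # Illustrative stand-in: redirect all requirement links around
    # `job`, then drop it from the graph.
    if job not in requires:
        raise ValueError(f"{job} not in scheduler")
    upstream = requires.pop(job)      # what `job` itself required
    for deps in requires.values():
        if job in deps:               # a downstream job required `job`
            deps.discard(job)
            deps.update(upstream)     # ...it now requires `job`'s requirements
    return requires

graph = {"a": set(), "b": {"a"}, "c": {"b"}}
bypass_and_remove(graph, "b")
```

After the call, "c" requires "a" directly, so the a-before-c ordering is preserved without "b".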
- check_cycles()[source]¶
Performs a minimal sanity check. The purpose of this is primarily to check for cycles, and/or missing starting points.
It’s not embedded in co_run() because it is not strictly necessary, but it is safer to call this before running the scheduler if one wants to double-check the jobs’ dependency graph early on. It might also help to have a sanitized scheduler, but here again this is up to the caller.
- Returns:
True if the topology is fine
- Return type:
bool
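The kind of sanity check performed here can be sketched with Kahn’s algorithm over a plain requirements mapping; this is an illustrative stand-in, not the library’s code:

```python
def check_cycles(requires):
    # requires: maps each job to the set of jobs it requires.
    # Kahn's algorithm: repeatedly peel off jobs whose requirements
    # are all satisfied; any leftover job means a cycle (and hence
    # no valid starting point for that part of the graph).
    pending = {job: set(deps) for job, deps in requires.items()}
    ready = [job for job, deps in pending.items() if not deps]
    done = 0
    while ready:
        job = ready.pop()
        done += 1
        for other, deps in pending.items():
            if job in deps:
                deps.discard(job)
                if not deps:          # all of `other`'s requirements met
                    ready.append(other)
    return done == len(pending)
```

An acyclic graph lets every job get peeled off; a cycle leaves its members stuck with unsatisfied requirements.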
- async co_run()[source]¶
The primary entry point for running a scheduler. See also run() for a synchronous wrapper around this coroutine.
Runs member jobs (that is, schedules their co_run() method) in an order that satisfies their required relationships.
Proceeds to the end no matter what, except if either:
one critical job raises an exception, or
a timeout occurs.
- Returns:
True if none of these 2 conditions occur, False otherwise.
- Return type:
bool
Jobs marked as forever are not waited for.
No automatic shutdown is performed; the user needs to explicitly call co_shutdown() or shutdown().
- async co_shutdown()[source]¶
Shuts down the scheduler, by sending the co_shutdown() method to all the jobs, possibly nested. Within nested schedulers, a job receives the shutdown event when its enclosing scheduler terminates, and not at the end of the outermost scheduler.
Also note that all job instances receive the co_shutdown() method, even the ones that have not yet started; it is up to the co_shutdown() method to triage the jobs according to their life cycle status - see is_running() and similar.
This mechanism should be used for minimal housekeeping only; it is recommended that intrusive cleanup be made part of separate, explicit methods.
- Note:
typically in apssh for example, several jobs sharing the same ssh connection need to arrange for that connection to be kept alive across an entire scheduler lifespan, and closed later on. Historically there had been an attempt to deal with this automagically, through the present shutdown mechanism. However, this turned out to be the wrong choice, as the choice of closing connections needs to be left to the user. Additionally, with nested schedulers, this can become pretty awkward. Closing ssh connections is now to be achieved explicitly through a call to a specific apssh function.
- Returns:
True if all the co_shutdown() methods attached to the jobs in the scheduler complete within shutdown_timeout, which is an attribute of the scheduler. If the shutdown_timeout attribute on this object is None, no timeout is implemented.
- Return type:
bool
Notes
There is probably room for a lot of improvement here:
behaviour is unspecified if any of the co_shutdown() methods raises an exception;
right now, a subscheduler that sees a timeout expiration does not cause the overall co_shutdown() to return False, which is arguable;
another possible weakness of the current implementation is that it does not support shutting down a scheduler that is still running.
- debrief(details=False)[source]¶
Designed for schedulers that have failed to orchestrate.
Prints a complete report that includes list(), but also gives more stats and data.
- dot_format()[source]¶
Creates a graph that depicts the jobs and their requires relationships, in DOT Format.
- Returns:
str: a representation of the graph in DOT Format underlying this scheduler.
See graphviz’s documentation, together with its Python wrapper library, for more information on the format and available tools.
See also Wikipedia on DOT for a list of tools that support the dot format.
As a general rule, asynciojobs has support for producing DOT Format but stops short of actually importing graphviz, which can be cumbersome to install - with the notable exception of the graph() method. See that method for how to convert a PureScheduler instance into a native DiGraph instance.
- entry_jobs()[source]¶
A generator that yields all jobs that have no requirement.
Examples
List all entry points:
for job in scheduler.entry_jobs():
    print(job)
- exit_jobs(*, discard_forever=True, compute_backlinks=True)[source]¶
A generator that yields all jobs that are not a requirement to another job; it is thus in some sense the reverse of entry_jobs().
- Parameters:
discard_forever – if True, jobs marked as forever are skipped; forever jobs often have no successors, but are seldom of interest when calling this method.
compute_backlinks – for this method to work properly, it is necessary to compute backlinks, an internal structure that holds the reverse of the required relationship. Passing False here allows that stage to be skipped, when that relationship is known to be up to date already.
- export_as_dotfile(filename)[source]¶
This method does not require graphviz to be installed; it writes a file in dot format for post-processing with e.g. graphviz’s dot utility. It is a simple wrapper around dot_format().
- Parameters:
filename – where to store the result.
- Returns:
a message that can be printed for information, like e.g. "(Over)wrote foo.dot"
- Return type:
str
See also the graph() method, which serves a similar purpose but natively as a graphviz object.
As an example of post-processing, a PNG image can then be obtained from that dotfile with e.g.:
dot -Tpng foo.dot -o foo.png
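The shape of the text produced by dot_format() can be sketched as follows; this dot_format is a minimal, hypothetical stand-in operating on a plain mapping, not the scheduler method itself:

```python
def dot_format(requires):
    # requires: maps a job label to the labels it requires.
    # Edges point from the required job to the job that requires it,
    # mirroring execution order; isolated entry points get a bare node.
    lines = ["digraph asynciojobs {"]
    for job, deps in requires.items():
        if not deps:
            lines.append(f'  "{job}";')
        for dep in deps:
            lines.append(f'  "{dep}" -> "{job}";')
    lines.append("}")
    return "\n".join(lines)

dot = dot_format({"fetch": [], "build": ["fetch"], "test": ["build"]})
```

The resulting string can be written to a `.dot` file and fed to the dot utility as shown above.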
- export_as_graphic(filename, suffix)[source]¶
Convenience wrapper that creates a graphic output file. Like graph(), it requires the graphviz package to be installed. See also export_as_pngfile(), which is a shortcut to using this method with suffix="png".
- Parameters:
filename – output filename, without the extension
suffix – one of the suffixes supported by dot -T, e.g. png or svg
- Returns:
created file name
Notes
This actually uses the dot binary program.
A file named like the output, but with a .dot extension, is created as an artefact by this method.
- export_as_pngfile(filename)[source]¶
Shortcut to export_as_graphic() with suffix="png"
- Parameters:
filename – output filename, without the .png extension
- Returns:
created file name
- export_as_svgfile(filename)[source]¶
Shortcut to export_as_graphic() with suffix="svg"
- Parameters:
filename – output filename, without the .svg extension
- Returns:
created file name
- failed_critical()[source]¶
- Returns:
True if and only if co_run() has failed because a critical job has raised an exception.
- Return type:
bool
- failed_time_out()[source]¶
- Returns:
True if and only if co_run() has failed because of a timeout.
- Return type:
bool
- graph()[source]¶
- Returns:
a graph
- Return type:
graphviz.Digraph
This method serves the same purpose as export_as_dotfile(), but it natively returns a graph instance. For that reason, its usage requires the installation of the graphviz package.
This method is typically useful in a Jupyter notebook, so as to visualize a scheduler in graph format - see http://graphviz.readthedocs.io/en/stable/manual.html#jupyter-notebooks for how this works.
The dependency from asynciojobs to graphviz is limited to this method and export_as_pngfile(), as these are the only places that need it, and installing graphviz can be cumbersome.
For example, on MacOS I had to do both:
brew install graphviz    # for the C/C++ binary stuff
pip3 install graphviz    # for the python bindings
- iterate_jobs(scan_schedulers=False)[source]¶
A generator that scans all jobs and subjobs
- Parameters:
scan_schedulers – if set, nested schedulers are listed as well; otherwise they are ignored and only actual jobs are reported.
- keep_only(remains)[source]¶
Modifies the scheduler in place, keeping only the jobs mentioned in remains.
Jobs that do not belong to self are ignored.
- Return type:
None
- keep_only_between(*, starts=None, ends=None, keep_starts=True, keep_ends=True)[source]¶
Allows selecting a subset of the scheduler, which is modified in place.
The algorithm works as follows: we preserve the jobs that are reachable from (i.e. are successors of) any of the starts vertices AND that can reach (i.e. are predecessors of) any of the ends vertices.
Of course, if starts is empty, only the ends criterion is used (since otherwise we would retain nothing); and likewise if ends is empty.
- Return type:
None
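The preserve-what-is-between rule can be sketched as the intersection of two transitive closures; the helper names below are illustrative assumptions, and the keep_starts/keep_ends refinements are ignored for brevity:

```python
def closure(neighbors, seeds):
    # Transitive closure of `seeds` under the `neighbors` mapping.
    seen, stack = set(seeds), list(seeds)
    while stack:
        for nxt in neighbors.get(stack.pop(), ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def keep_only_between(requires, starts, ends):
    # requires: job -> set of required jobs; build the reverse
    # (successors) relation, then intersect the two closures.
    successors = {job: set() for job in requires}
    for job, deps in requires.items():
        for dep in deps:
            successors[dep].add(job)
    downstream = closure(successors, starts) if starts else set(requires)
    upstream = closure(requires, ends) if ends else set(requires)
    return downstream & upstream

graph = {"a": set(), "b": {"a"}, "c": {"b"}, "d": set()}
kept = keep_only_between(graph, starts={"a"}, ends={"c"})
```

Here "d" is dropped: it is neither downstream of "a" nor upstream of "c".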
- list(details=False)[source]¶
Prints a complete list of jobs in topological order, with their status summarized with a few signs. See the README for examples and a legend.
Beware that this might raise an exception if check_cycles() would return False, i.e. if the graph is not acyclic.
- list_safe()[source]¶
Prints jobs in no specific order; the advantage is that it works even if the scheduler is broken with respect to check_cycles(). On the other hand, this method is not able to list requirements.
- orchestrate(*args, **kwds)¶
A synchronous wrapper around co_run(); please refer to that link for details on parameters and return value.
Also, the canonical name for this is run(), but for historical reasons you can also use orchestrate() as an alias for run().
- predecessors(*starts)[source]¶
Returns the set of all the jobs in this scheduler that any of the starts jobs requires.
- Return type:
set[AbstractJob]
- predecessors_upstream(*starts)[source]¶
Returns the set of all the jobs that any of the starts jobs depends on, either immediately or further up the execution path.
- Return type:
set[AbstractJob]
- remove(job)[source]¶
Removes a single Schedulable object; this method name is inspired by plain Python set.remove().
- Parameters:
job – a single Schedulable object.
- Raises:
KeyError – if job not in scheduler.
- Returns:
the scheduler object, for cascading insertions if needed.
- Return type:
self
- run(*args, **kwds)[source]¶
A synchronous wrapper around co_run(); please refer to that link for details on parameters and return value.
Also, the canonical name for this is run(), but for historical reasons you can also use orchestrate() as an alias for run().
- sanitize(verbose=None)[source]¶
This method ensures that the requirements relationship is closed within the scheduler. In other words, it removes any requirement attached to a job in this scheduler, but that is not itself part of the scheduler.
This can come in handy in some schedulers whose composition depends on external conditions.
In any case it is crucial that this property holds for co_run() to perform properly.
- Parameters:
verbose – if not None, defines verbosity for this operation; otherwise, the object’s verbose attribute is used. In verbose mode, jobs that are changed - i.e. that have requirement(s) dropped because they are not part of the same scheduler - are listed, together with their container scheduler.
- Returns:
True if the scheduler object was fine, and False if at least one removal was needed.
- Return type:
bool
- shutdown()[source]¶
A synchronous wrapper around co_shutdown().
- Returns:
True if everything went well, False otherwise; see co_shutdown() for details.
- Return type:
bool
- stats()[source]¶
Returns a string like e.g. 2D + 3R + 4I = 9, meaning that the scheduler currently has 2 done, 3 running and 4 idle jobs.
- successors(*starts, compute_backlinks=True)[source]¶
Returns the set of all the jobs in this scheduler that require any of the starts jobs.
- Return type:
Iterator[AbstractJob]
- successors_downstream(*starts, compute_backlinks=True)[source]¶
Returns the set of all the jobs that depend on any of the starts jobs, either immediately or further down the execution path.
- Return type:
set[AbstractJob]
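The difference between the immediate and transitive variants can be sketched on a plain requirements mapping; these are illustrative stand-ins, not the library methods:

```python
def successors(requires, *starts):
    # Immediate variant: jobs whose requirements include any of starts.
    return {job for job, deps in requires.items() if deps & set(starts)}

def successors_downstream(requires, *starts):
    # Transitive variant: iterate the immediate relation to a fixpoint.
    result, frontier = set(), set(starts)
    while frontier:
        frontier = successors(requires, *frontier) - result
        result |= frontier
    return result

graph = {"a": set(), "b": {"a"}, "c": {"b"}}
```

On this graph, "a" has one immediate successor ("b") but two downstream successors ("b" and "c").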
- topological_order()[source]¶
A generator function that scans the graph in topological order, in the same order as the orchestration, i.e. starting from jobs that have no dependencies, and moving forward.
Beware that this is not a separate iterator, so it can’t be nested, which in practice should not be a problem.
Examples
Assuming all jobs have a label attribute, print them in the “right” order:
for job in scheduler.topological_order():
    print(job.label)
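A topological scan like this one can be sketched as a generator over a plain requirements mapping; this is an illustrative stand-in, which also shows why a cycle makes such a scan impossible:

```python
def topological_order(requires):
    # requires: maps each job to the set of jobs it requires.
    # Yield each job only after every job it requires has been
    # yielded - the same order the orchestration would use.
    pending = {job: set(deps) for job, deps in requires.items()}
    while pending:
        ready = [job for job, deps in pending.items() if not deps]
        if not ready:
            raise ValueError("dependency graph has a cycle")
        for job in ready:
            del pending[job]
            yield job
            for deps in pending.values():
                deps.discard(job)

order = list(topological_order({"a": set(), "b": {"a"}, "c": {"b"}}))
```

When no job is ready but some remain pending, the leftover jobs necessarily form a cycle.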
The Scheduler class makes it easier to nest scheduler objects.
- class asynciojobs.scheduler.Scheduler(*jobs_or_sequences, jobs_window=None, timeout=None, shutdown_timeout=1, watch=None, verbose=False, **kwds)[source]¶
The Scheduler class is a mixin of the two classes PureScheduler and AbstractJob.
As such it can be used to create nested schedulers: since it is a scheduler, it can contain jobs; and since it is at the same time a job, it can itself be included in a scheduler.
- Parameters:
jobs_or_sequences – passed to PureScheduler; allows these jobs to be added inside the newly-created scheduler;
jobs_window – passed to PureScheduler;
timeout – passed to PureScheduler;
shutdown_timeout – passed to PureScheduler;
watch (Watch) – passed to PureScheduler;
verbose (bool) – passed to PureScheduler;
kwds – all other named arguments are sent to the AbstractJob constructor.
Example
Here’s how to create a very simple scheduler with an embedded sub-scheduler; the whole result is equivalent to a simple 4-steps sequence:
main = Scheduler(
    Sequence(
        Job(aprint("begin", duration=0.25)),
        Scheduler(
            Sequence(
                Job(aprint("middle-begin", duration=0.25)),
                Job(aprint("middle-end", duration=0.25)),
            )
        ),
        Job(aprint("end", duration=0.25)),
    )
)
main.run()
Notes
There can be several good reasons for using nested schedulers:
the scope of a window - i.e. the jobs_window attribute - applies to a scheduler, so a nested scheduler is a means to apply windowing to a specific set of jobs;
likewise, the timeout attribute only applies to the run of the whole scheduler;
you can use forever jobs that will be terminated earlier than the end of the global scheduler;
strictly speaking, the outermost instance in this example could be an instance of PureScheduler, but in practice it is simpler to always create instances of Scheduler.
Using an intermediate-level scheduler can in some cases help alleviate or solve such issues.
- check_cycles()[source]¶
Supersedes check_cycles() to account for nested schedulers.
- Returns:
True if this scheduler, and all its nested schedulers at any depth, have no cycle and can be safely scheduled.
- Return type:
bool
- async co_run()[source]¶
Supersedes the co_run() method in order to account for critical schedulers.
Scheduler being a subclass of AbstractJob, we need to account for the possibility that a scheduler is defined as critical.
If the inherited co_run() method fails because of an exception or a timeout, a critical Scheduler will trigger an exception instead of returning False:
if orchestration failed because an internal job has raised an exception, raise that exception;
if it failed because of a timeout, raise TimeoutError.
- Returns:
True if everything went well; False for non-critical schedulers that go south.
- Return type:
bool
- Raises:
TimeoutError – for critical schedulers that do not complete in time,
Exception – for a critical scheduler that has a critical job that triggers an exception, in which case it bubbles up.
Job-like classes¶
This module defines AbstractJob, the base class for all the jobs in a Scheduler, as well as a basic concrete subclass Job for creating a job from a coroutine.
It also defines a couple of simple job classes.
- class asynciojobs.job.AbstractJob(*, forever=False, critical=True, label=None, required=None, scheduler=None)[source]¶
AbstractJob is a virtual class:
it offers some very basic graph-related features to model requirements a la Makefile;
its subclasses are expected to implement co_run() and co_shutdown() methods that specify the actual behaviour of the job, as coroutines.
AbstractJob is mostly a companion class to the PureScheduler class, which triggers these co_* methods.
Life Cycle: AbstractJob is also aware of a common life cycle for all jobs, which can be summarized as follows:
idle → scheduled → running → done
In un-windowed schedulers, there is no distinction between scheduled and running. In other words, in this case a job goes directly from idle to running.
On the other hand, in windowed orchestrations - see the jobs_window attribute to PureScheduler() - a job can be scheduled but not yet running, because it is waiting for a slot in the global window.
- Parameters:
forever (bool) – if set, means the job is not returning at all and runs forever; in this case Scheduler.orchestrate() will not wait for that job, and will terminate it once all the regular - i.e. not-forever - jobs are done.
critical (bool) – if set, this flag indicates that any exception raised during the execution of that job should result in the scheduler aborting its run immediately. The default behaviour is to let the scheduler finish its jobs, at which point the jobs can be inspected for exceptions or results.
required – this can be one, or a collection of, jobs that will make the job’s requirements; requirements can be added later on as well.
label (str) – mostly for convenience, allows you to specify the way that particular job should be displayed by the scheduler, either in textual form by Scheduler.list(), or in graphical form by Scheduler.graph(). See also text_label() and graph_label() for how this is used. As far as labelling goes, each subclass of AbstractJob implements a default labelling scheme, so it is not mandatory to set a specific label on each job instance; however it is sometimes useful. Labels must not be confused with details, see details().
scheduler – this can be an instance of a PureScheduler object, in which case the newly created job instance is immediately added to it. A job instance can also be inserted in a scheduler instance later on.
Note: a Job instance must be added to at most one Scheduler instance; be aware that the code does not enforce this property, and that odd behaviours can be observed if it is not fulfilled.
- details()[source]¶
An optional method to implement on concrete job classes; if it returns a non-None value, these additional details about that job will get printed by asynciojobs.purescheduler.PureScheduler.list() and asynciojobs.purescheduler.PureScheduler.debrief() when called with details=True.
- dot_style()[source]¶
This method computes the DOT attributes which are used to style boxes according to critical / forever / and similar.
The legend is quite simple:
schedulers have sharp angles, while other jobs have rounded corners,
critical jobs have a colored and thick border, and
forever jobs have a dashed border.
- Returns:
a dict-like mapping that sets DOT attributes for that job.
- Return type:
DotStyle
- graph_label()[source]¶
This method is intended to be redefined by daughter classes.
- Returns:
a string used by the Scheduler methods that produce a graph, such as graph() and export_as_dotfile().
Because of the way graphs are presented, it can contain “newline” characters, which will render as line breaks in the output graph.
If this method is not defined on a concrete class, then the text_label() method is used instead.
- is_done()[source]¶
- Returns:
True if the job has completed.
- Return type:
bool
If this method returns True, it implies that is_scheduled() and is_running() would also return True at that time.
- is_idle()[source]¶
- Returns:
True if the job has not yet been scheduled, which in other words means that at least one of its requirements is not fulfilled.
- Return type:
bool
Implies not is_scheduled(), and so a fortiori not is_running and not is_done().
- is_running()[source]¶
- Returns:
once a job starts, it tries to get a slot in the windowing system. This method returns True if the job has received the green light from the windowing system. Implies is_scheduled().
- Return type:
bool
- is_scheduled()[source]¶
- Returns:
True if the job has been scheduled.
- Return type:
bool
If True, it means that the job’s requirements are met, and it has proceeded to the windowing system; equivalent to not is_idle().
- raised_exception()[source]¶
- Returns:
an exception if the job has completed by raising an exception, and None otherwise.
- repr_id()[source]¶
- Returns:
the job’s id inside the scheduler, or ‘??’ if that was not yet set by the scheduler.
- Return type:
str
- repr_main()[source]¶
- Returns:
standardized body of the object’s repr, like e.g. <SshJob `my command`>.
- Return type:
str
- repr_result()[source]¶
- Returns:
standardized part of the repr that shows the result or exception of the job.
- Return type:
str
- repr_short()[source]¶
- Returns:
a 4-character string (in fact 7 with interspaces) that summarizes the 4 dimensions of the job, that is to say:
- Return type:
str
its point in the lifecycle (idle → scheduled → running → done)
is it declared as forever
is it declared as critical
did it trigger an exception
- requires(*requirements, remove=False)[source]¶
- Parameters:
requirements – an iterable of AbstractJob instances that are added to the requirements.
remove (bool) – if set, the requirements are dropped rather than added.
- Raises:
KeyError – when trying to remove dependencies that were not present.
- Returns:
for chaining
- Return type:
self
For convenience, any nested structure made of job instances can be provided, and if None objects are found, they are silently ignored. For example, with j{1,2,3,4} being jobs or sequences, all the following calls are legitimate:
j1.requires(None)
j1.requires([None])
j1.requires((None,))
j1.requires(j2)
j1.requires(j2, j3)
j1.requires([j2, j3])
j1.requires(j2, [j3, j4])
j1.requires((j2, j3))
j1.requires(([j2], [[[j3]]]))
Any of the above with remove=True.
For dropping dependencies instead of adding them, use remove=True.
- result()[source]¶
- Returns:
When this job is completed and has not raised an exception, this method lets you retrieve the job’s result, i.e. the value returned by its co_run() method.
- standalone_run()[source]¶
A convenience helper that just runs this one job on its own.
Mostly useful for debugging the internals of that job, e.g. for checking for gross mistakes and other exceptions.
- text_label()[source]¶
This method is intended to be redefined by daughter classes.
- Returns:
a one-line string that describes this job.
This representation of the job is used by the Scheduler object through its list() and debrief() methods, i.e. when a scheduler is printed out in textual format.
The overall logic is to always use the instance’s label attribute if set, or to use this method otherwise. If none of this returns anything useful, the textual label used is NOLABEL.
- class asynciojobs.job.Job(corun, *args, coshutdown=None, **kwds)[source]¶
The simplest concrete job class, for building an instance of AbstractJob from a Python coroutine.
- Parameters:
corun – a coroutine to be evaluated when the job runs
coshutdown – an optional coroutine to be evaluated when the scheduler is done running
scheduler – passed to AbstractJob
required – passed to AbstractJob
label – passed to AbstractJob
Example
To create a job that prints a message and waits for a fixed delay:
async def aprint(message, delay):
    print(message)
    await asyncio.sleep(delay)

j = Job(aprint("Welcome - idling for 3 seconds", 3))
- async co_run()[source]¶
Implementation of the method expected by AbstractJob.
- async co_shutdown()[source]¶
Implementation of the method expected by AbstractJob, or more exactly by asynciojobs.purescheduler.PureScheduler.list().
- text_label()[source]¶
Implementation of the method expected by AbstractJob.
The Sequence class¶
This module defines the Sequence class, which is designed to ease the building of schedulers.
- class asynciojobs.sequence.Sequence(*sequences_or_jobs, required=None, scheduler=None)[source]¶
A Sequence is an object that organizes a set of AbstractJobs in a sequence. Its main purpose is to add a single required relationship per job in the sequence, except for the first one, which instead receives the sequence’s requirements as its own.
If scheduler is passed to the sequence’s constructor, all the jobs passed to the sequence are added in that scheduler.
Sequences are not first-class citizens, in the sense that the scheduler primarily ignores these objects, only the jobs inside the sequence matter.
However a sequence can be used in essentially every place where a job could be: it can be inserted in a scheduler, added as a requirement, and it can have requirements too.
- Parameters:
sequences_or_jobs – each must be a Schedulable object; the order of course is important here
required – one, or a collection of, Schedulable objects that will become the requirements for the first job in the sequence
scheduler – if provided, the jobs in the sequence will be inserted in that scheduler.
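The wiring that a Sequence performs can be sketched on plain labels; wire_sequence below is an illustrative, hypothetical stand-in showing how each job ends up requiring its predecessor, with the first job inheriting the sequence-level requirements:

```python
def wire_sequence(jobs, required=None):
    # Emulate the Sequence wiring: each job requires its predecessor
    # in the list; the first job inherits the sequence-level
    # requirements (the `required` argument).
    requires = {}
    previous = set(required or ())
    for job in jobs:
        requires[job] = set(previous)
        previous = {job}
    return requires

links = wire_sequence(["fetch", "build", "test"], required={"setup"})
```

This is why a sequence is "not a first-class citizen": only the per-job requirement links it creates matter to the scheduler.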
Notes on ordering¶
Schedulers and jobs requirements are essentially sets of jobs, and from a semantic point of view, order does not matter.
However for debugging/cosmetic reasons, keeping track of creation order can be convenient.
So using OrderedSet looks like a good idea; but it turns out that on some distros like Fedora, installing OrderedSet can be a pain, as it involves recompiling C code, which in turn pulls in a great deal of dependencies.
For this reason, we use OrderedSet only if available, and resort to regular sets otherwise.
On macOS or Ubuntu, fortunately, this can simply be achieved with:
pip3 install orderedset
or alternatively with:
pip3 install asynciojobs[ordered]
Convenience classes¶
The PrintJob class is a specialization of the AbstractJob class, mostly useful for debugging, tests and tutorials.
- class asynciojobs.printjob.PrintJob(*messages, sleep=None, banner=None, scheduler=None, label=None, required=None)[source]¶
A job that just prints messages, and optionally sleeps for some time.
- Parameters:
messages – passed to print as-is
sleep – optional, an int or float describing in seconds how long to sleep after the messages get printed
banner – optional, a fixed text printed out before the messages, like e.g. 40*'='; it won’t make it into details()
scheduler – passed to AbstractJob
required – passed to AbstractJob
label – passed to AbstractJob
A utility to print time and compute durations, mostly for debugging and tests.
- class asynciojobs.watch.Watch(message=None, *, show_elapsed=True, show_wall_clock=False)[source]¶
This class essentially remembers a starting point, so that durations relative to that epoch can be printed for debug instead of a plain timestamp.
- Parameters:
message (str) – used in the printed message at creation time,
show_elapsed (bool) – tells if a message with the elapsed time needs to be printed at creation time (elapsed will be 0),
show_wall_clock (bool) – same for the wall clock.
Examples
Here’s a simple use case; note that print_wall_clock() is a static method because it is mostly useful, precisely, when you do not have a Watch object at hand:
$ python3
Python 3.6.4 (default, Mar 9 2018, 23:15:12)
<snip>
>>> from asynciojobs import Watch
>>> import time
>>> watch = Watch("hello there"); time.sleep(1); watch.print_elapsed()
000.000 hello there
001.000 >>>
>>> Watch.print_wall_clock()
20:48:27.782
- elapsed()[source]¶
- Returns:
number of seconds elapsed since start, formatted on 7 characters: 3 for seconds, a dot, 3 for milliseconds
- Return type:
str
- print_elapsed(suffix=' ')[source]¶
Print the elapsed time since start in format SSS.MMM + a suffix.
- Parameters:
suffix (str) – is appended to the output; to be explicit, by default no newline is added.
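The 7-character SSS.MMM rendering described above boils down to a fixed-width float format; this one-liner is a sketch of that formatting, not the library code:

```python
def format_elapsed(seconds):
    # Sketch of the 7-character SSS.MMM rendering: zero-padded
    # seconds, a dot, then three digits of milliseconds.
    return f"{seconds:07.3f}"
```

For instance, one and a half seconds renders as 001.500, matching the transcript shown for Watch.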