'\" t .\" Man page generated from reStructuredText. . . .nr rst2man-indent-level 0 . .de1 rstReportMargin \\$1 \\n[an-margin] level \\n[rst2man-indent-level] level margin: \\n[rst2man-indent\\n[rst2man-indent-level]] - \\n[rst2man-indent0] \\n[rst2man-indent1] \\n[rst2man-indent2] .. .de1 INDENT .\" .rstReportMargin pre: . RS \\$1 . nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin] . nr rst2man-indent-level +1 .\" .rstReportMargin post: .. .de UNINDENT . RE .\" indent \\n[an-margin] .\" old: \\n[rst2man-indent\\n[rst2man-indent-level]] .nr rst2man-indent-level -1 .\" new: \\n[rst2man-indent\\n[rst2man-indent-level]] .in \\n[rst2man-indent\\n[rst2man-indent-level]]u .. .TH "PYTHONPERFORMANCEBENCHMARKSUITE" "1" "Nov 27, 2024" "1.0.6" "Python Performance Benchmark Suite" .SH NAME pythonperformancebenchmarksuite \- Python Performance Benchmark Suite Documentation .sp The \fBpyperformance\fP project is intended to be an authoritative source of benchmarks for all Python implementations. The focus is on real\-world benchmarks, rather than synthetic benchmarks, using whole applications when possible. .INDENT 0.0 .IP \(bu 2 \X'tty: link http://pyperformance.readthedocs.io/'\fI\%pyperformance documentation\fP\X'tty: link' .IP \(bu 2 \X'tty: link https://github.com/python/pyperformance'\fI\%pyperformance GitHub project\fP\X'tty: link' (source code, issues) .IP \(bu 2 \X'tty: link https://pypi.python.org/pypi/pyperformance'\fI\%Download pyperformance on PyPI\fP\X'tty: link' .UNINDENT .sp pyperformance is distributed under the MIT license. .sp Documentation: .SH USAGE .SS Installation .sp Command to install pyperformance: .INDENT 0.0 .INDENT 3.5 .sp .EX python3 \-m pip install pyperformance .EE .UNINDENT .UNINDENT .sp The command installs a new \fBpyperformance\fP program. .sp If needed, \fBpyperf\fP and \fBsix\fP dependencies are installed automatically. .sp pyperformance works on Python 3.6 and newer, but it may work on Python 3.4 and 3.5. .sp At runtime, Python development files (header files) may be needed to install some dependencies like \fBdulwich_log\fP or \fBpsutil\fP, to build their C extension. Commands on Fedora to install dependencies: .INDENT 0.0 .IP \(bu 2 Python 3: \fBsudo dnf install python3\-devel\fP .IP \(bu 2 PyPy: \fBsudo dnf install pypy\-devel\fP .UNINDENT .SS Windows notes .sp On Windows, to allow pyperformance to build dependencies from source like \fBgreenlet\fP, \fBdulwich\fP or \fBpsutil\fP, if you want to use a \fBpython.exe\fP built from source, you should not use the \fBpython.exe\fP directly. Instead, you must run the little\-known command \fBPC\elayout\fP to create a filesystem layout that resembles an installed Python: .INDENT 0.0 .INDENT 3.5 .sp .EX \&.\epython.bat \-m PC.layout \-\-preset\-default \-\-copy installed \-v .EE .UNINDENT .UNINDENT .sp (Use the \fB\-\-help\fP flag for more info about \fBPC\elayout\fP\&.) .sp Now you can use the \(dqinstalled\(dq Python executable: .INDENT 0.0 .INDENT 3.5 .sp .EX installed\epython.exe \-m pip install pyperformance installed\epython.exe \-m pyperformance run ... .EE .UNINDENT .UNINDENT .sp Using an \fIactually\fP installed Python executable (e.g. via \fBpy\fP) works fine too. 
.SS Run benchmarks .sp Commands to compare Python 3.6 and Python 3.7 performance: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance run \-\-python=python3.6 \-o py36.json pyperformance run \-\-python=python3.7 \-o py37.json pyperformance compare py36.json py37.json .EE .UNINDENT .UNINDENT .sp Note: the \fBpython3 \-m pyperformance ...\fP syntax works as well (ex: \fBpython3 \-m pyperformance run \-o py37.json\fP), but requires installing pyperformance on each tested Python version. .sp JSON files are produced by the pyperf module and so can be analyzed using pyperf commands: .INDENT 0.0 .INDENT 3.5 .sp .EX python3 \-m pyperf show py36.json python3 \-m pyperf check py36.json python3 \-m pyperf metadata py36.json python3 \-m pyperf stats py36.json python3 \-m pyperf hist py36.json python3 \-m pyperf dump py36.json (...) .EE .UNINDENT .UNINDENT .sp It\(aqs also possible to use pyperf to compare results of two JSON files: .INDENT 0.0 .INDENT 3.5 .sp .EX python3 \-m pyperf compare_to py36.json py37.json \-\-table .EE .UNINDENT .UNINDENT .SS Basic commands .sp pyperformance actions: .INDENT 0.0 .INDENT 3.5 .sp .EX run Run benchmarks on the running python show Display a benchmark file compare Compare two benchmark files list List benchmarks of the running Python list_groups List benchmark groups of the running Python venv Actions on the virtual environment .EE .UNINDENT .UNINDENT .SS Common options .sp Options available to all commands: .INDENT 0.0 .INDENT 3.5 .sp .EX \-h, \-\-help show this help message and exit .EE .UNINDENT .UNINDENT .SS run .sp Run benchmarks on the running python. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance run [\-h] [\-r] [\-f] [\-\-debug\-single\-value] [\-v] [\-m] [\-\-affinity CPU_LIST] [\-o FILENAME] [\-\-append FILENAME] [\-\-manifest MANIFEST] [\-b BM_LIST] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-h, \-\-help show this help message and exit \-r, \-\-rigorous Spend longer running tests to get more accurate results \-f, \-\-fast Get rough answers quickly \-\-debug\-single\-value Debug: fastest mode, only compute a single value \-v, \-\-verbose Print more output \-m, \-\-track\-memory Track memory usage. This only works on Linux. \-\-affinity CPU_LIST Specify CPU affinity for benchmark runs. This way, benchmarks can be forced to run on a given CPU to minimize run to run variation. \-o FILENAME, \-\-output FILENAME Run the benchmarks on only one interpreter and write benchmark into FILENAME. Provide only baseline_python, not changed_python. \-\-append FILENAME Add runs to an existing file, or create it if it doesn\(aqt exist \-\-manifest MANIFEST benchmark manifest file to use \-b BM_LIST, \-\-benchmarks BM_LIST Comma\-separated list of benchmarks to run. Can contain both positive and negative arguments: \-\-benchmarks=run_this,also_this,\-not_this. If there are no positive arguments, we\(aqll run all benchmarks except the negative arguments. Otherwise we run only the positive arguments. \-\-inherit\-environ VAR_LIST Comma\-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. \-p PYTHON, \-\-python PYTHON Python executable (default: use running Python) \-\-same\-loops SAME_LOOPS Use the same number of loops as a previous run (i.e., don\(aqt recalibrate). Should be a path to a .json file from a previous run. .EE .UNINDENT .UNINDENT .SS show .sp Display a benchmark file.
.sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX show FILENAME .EE .UNINDENT .UNINDENT .sp positional arguments: .INDENT 0.0 .INDENT 3.5 .sp .EX FILENAME .EE .UNINDENT .UNINDENT .SS compare .sp Compare two benchmark files. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance compare [\-h] [\-v] [\-O STYLE] [\-\-csv CSV_FILE] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] baseline_file.json changed_file.json .EE .UNINDENT .UNINDENT .sp positional arguments: .INDENT 0.0 .INDENT 3.5 .sp .EX baseline_file.json changed_file.json .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-v, \-\-verbose Print more output \-O STYLE, \-\-output_style STYLE What style the benchmark output should take. Valid options are \(aqnormal\(aq and \(aqtable\(aq. Default is normal. \-\-csv CSV_FILE Name of a file the results will be written to, as a three\-column CSV file containing minimum runtimes for each benchmark. \-\-inherit\-environ VAR_LIST Comma\-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. \-p PYTHON, \-\-python PYTHON Python executable (default: use running Python) .EE .UNINDENT .UNINDENT .SS list .sp List benchmarks of the running Python. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance list [\-h] [\-\-manifest MANIFEST] [\-b BM_LIST] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-manifest MANIFEST benchmark manifest file to use \-b BM_LIST, \-\-benchmarks BM_LIST Comma\-separated list of benchmarks to run. Can contain both positive and negative arguments: \-\-benchmarks=run_this,also_this,\-not_this. If there are no positive arguments, we\(aqll run all benchmarks except the negative arguments. Otherwise we run only the positive arguments. \-\-inherit\-environ VAR_LIST Comma\-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. \-p PYTHON, \-\-python PYTHON Python executable (default: use running Python) .EE .UNINDENT .UNINDENT .sp Use \fBpython3 \-m pyperformance list \-b all\fP to list all benchmarks. .SS list_groups .sp List benchmark groups of the running Python. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance list_groups [\-h] [\-\-manifest MANIFEST] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-manifest MANIFEST benchmark manifest file to use \-\-inherit\-environ VAR_LIST Comma\-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. \-p PYTHON, \-\-python PYTHON Python executable (default: use running Python) .EE .UNINDENT .UNINDENT .SS venv .sp Actions on the virtual environment. .sp Actions: .INDENT 0.0 .INDENT 3.5 .sp .EX show Display the path to the virtual environment and its status (created or not) create Create the virtual environment recreate Force the recreation of the virtual environment remove Remove the virtual environment .EE .UNINDENT .UNINDENT .sp Common options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-venv VENV Path to the virtual environment \-\-inherit\-environ VAR_LIST Comma\-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses.
\-p PYTHON, \-\-python PYTHON Python executable (default: use running Python) .EE .UNINDENT .UNINDENT .SS venv show .sp Display the path to the virtual environment and its status (created or not). .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance venv show [\-h] [\-\-venv VENV] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] .EE .UNINDENT .UNINDENT .SS venv create .sp Create the virtual environment. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance venv create [\-h] [\-\-venv VENV] [\-\-manifest MANIFEST] [\-b BM_LIST] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-manifest MANIFEST benchmark manifest file to use \-b BM_LIST, \-\-benchmarks BM_LIST Comma\-separated list of benchmarks to run. Can contain both positive and negative arguments: \-\-benchmarks=run_this,also_this,\-not_this. If there are no positive arguments, we\(aqll run all benchmarks except the negative arguments. Otherwise we run only the positive arguments. .EE .UNINDENT .UNINDENT .SS venv recreate .sp Force the recreation of the virtual environment. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance venv recreate [\-h] [\-\-venv VENV] [\-\-manifest MANIFEST] [\-b BM_LIST] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-manifest MANIFEST benchmark manifest file to use \-b BM_LIST, \-\-benchmarks BM_LIST Comma\-separated list of benchmarks to run. Can contain both positive and negative arguments: \-\-benchmarks=run_this,also_this,\-not_this. If there are no positive arguments, we\(aqll run all benchmarks except the negative arguments. Otherwise we run only the positive arguments. .EE .UNINDENT .UNINDENT .SS venv remove .sp Remove the virtual environment. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance venv remove [\-h] [\-\-venv VENV] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] .EE .UNINDENT .UNINDENT .SS Compile Python to run benchmarks .sp pyperformance actions: .INDENT 0.0 .INDENT 3.5 .sp .EX compile Compile and install CPython and run benchmarks on installed Python compile_all Compile and install CPython and run benchmarks on installed Python on all branches and revisions of CONFIG_FILE upload Upload JSON results to a Codespeed website .EE .UNINDENT .UNINDENT .sp All these commands require a configuration file. .sp Simple configuration usable for \fBcompile\fP (but not for \fBcompile_all\fP nor \fBupload\fP), \fBdoc/benchmark.conf\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX [config] json_dir = ~/prog/python/bench_json [scm] repo_dir = ~/prog/python/master update = True [compile] bench_dir = ~/prog/python/bench_tmpdir [run_benchmark] system_tune = True affinity = 2,3 .EE .UNINDENT .UNINDENT .sp Configuration file sample with comments, \fBdoc/benchmark.conf.sample\fP: .INDENT 0.0 .INDENT 3.5 .sp .EX [config] # Directory where JSON files are written. # \- uploaded files are moved to json_dir/uploaded/ # \- results of patched Python are written into json_dir/patch/ json_dir = ~/json # If True, compile CPython in debug mode (LTO and PGO disabled), # run benchmarks with \-\-debug\-single\-sample, and disable upload. # # Use this option to quickly test a configuration. debug = False [scm] # Directory of CPython source code (Git repository) repo_dir = ~/cpython # Update the Git repository (git fetch)? update = True # Name of the Git remote, used to create revision of # the Git branch. For example, use revision \(aqremotes/origin/3.6\(aq # for the branch \(aq3.6\(aq.
git_remote = remotes/origin [compile] # Create files into bench_dir: # \- bench_dir/bench\-xxx.log # \- bench_dir/prefix/: where Python is installed # \- bench_dir/venv/: Virtual environment used by pyperformance bench_dir = ~/bench_tmpdir # Link Time Optimization (LTO)? lto = True # Profile Guided Optimization (PGO)? pgo = True # The space\-separated list of libraries that are package\-only, # i.e., locally installed but not on header and library paths. # For each such library, determine the install path and add an # appropriate subpath to CFLAGS and LDFLAGS declarations passed # to configure. As an exception, the prefix for openssl, if that # library is present here, is passed via the \-\-with\-openssl # option. Currently, this only works with Homebrew on macOS. # If running on macOS with Homebrew, you probably want to use: # pkg_only = openssl readline sqlite3 xz zlib # The version of zlib shipping with macOS probably works as well, # as long as Apple\(aqs SDK headers are installed. pkg_only = # Install Python? If false, run Python from the build directory # # WARNING: Running Python from the build directory introduces subtle changes # compared to running an installed Python. Moreover, creating a virtual # environment using a Python run from the build directory fails in many cases, # especially on Python older than 3.4. Only disable installation if you # really understand what you are doing! install = True # Specify \(aq\-j\(aq parameter in \(aqmake\(aq command jobs = 8 [run_benchmark] # Run \(dqsudo python3 \-m pyperf system tune\(dq before running benchmarks? system_tune = True # \-\-manifest option for \(aqpyperformance run\(aq manifest = # \-\-benchmarks option for \(aqpyperformance run\(aq benchmarks = # \-\-affinity option for \(aqpyperf system tune\(aq and \(aqpyperformance run\(aq affinity = # Upload generated JSON file? # # Upload is disabled on patched Python, in debug mode or if install is # disabled. upload = False # Configuration to upload results to a Codespeed website [upload] url = environment = executable = project = [compile_all] # List of CPython Git branches branches = default 3.6 3.5 2.7 # List of revisions to benchmark by compile_all [compile_all_revisions] # list of \(aqsha1=\(aq (default branch: \(aqmaster\(aq) or \(aqsha1=branch\(aq # used by the \(dqpyperformance compile_all\(dq command # e.g.: 11159d2c9d6616497ef4cc62953a5c3cc8454afb = .EE .UNINDENT .UNINDENT .SS compile .sp Compile Python, install Python and run benchmarks on the installed Python. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance compile [\-h] [\-\-patch PATCH] [\-U] [\-T] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] config_file revision [branch] .EE .UNINDENT .UNINDENT .sp positional arguments: .INDENT 0.0 .INDENT 3.5 .sp .EX config_file Configuration filename revision Python benchmarked revision branch Git branch .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-patch PATCH Patch file \-U, \-\-no\-update Don\(aqt update the Git repository \-T, \-\-no\-tune Don\(aqt run \(aqpyperf system tune\(aq to tune the system for benchmarks \-\-inherit\-environ VAR_LIST Comma\-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses.
\-p PYTHON, \-\-python PYTHON Python executable (default: use running Python) .EE .UNINDENT .UNINDENT .sp Notes: .INDENT 0.0 .IP \(bu 2 PGO is broken on Ubuntu 14.04 LTS with GCC 4.8.4\-2ubuntu1~14.04: \fBModules/socketmodule.c:7743:1: internal compiler error: in edge_badness, at ipa\-inline.c:895\fP .UNINDENT .SS compile_all .sp Compile all branches and revisions of CONFIG_FILE. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance compile_all [\-h] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] config_file .EE .UNINDENT .UNINDENT .sp positional arguments: .INDENT 0.0 .INDENT 3.5 .sp .EX config_file Configuration filename .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-inherit\-environ VAR_LIST Comma\-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. \-p PYTHON, \-\-python PYTHON Python executable (default: use running Python) .EE .UNINDENT .UNINDENT .SS upload .sp Upload results from a JSON file to a Codespeed website. .sp Usage: .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance upload [\-h] [\-\-inherit\-environ VAR_LIST] [\-p PYTHON] config_file json_file .EE .UNINDENT .UNINDENT .sp positional arguments: .INDENT 0.0 .INDENT 3.5 .sp .EX config_file Configuration filename json_file JSON filename .EE .UNINDENT .UNINDENT .sp options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-inherit\-environ VAR_LIST Comma\-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. \-p PYTHON, \-\-python PYTHON Python executable (default: use running Python) .EE .UNINDENT .UNINDENT .SS How to get stable benchmarks .INDENT 0.0 .IP \(bu 2 Run the \fBpython3 \-m pyperf system tune\fP command .IP \(bu 2 Compile Python using LTO (Link Time Optimization) and PGO (profile guided optimizations): use the \fI\%pyperformance compile\fP command, which uses LTO and PGO by default .IP \(bu 2 See the advice in the pyperf documentation: \X'tty: link http://pyperf.readthedocs.io/en/latest/run_benchmark.html#how-to-get-reproductible-benchmark-results'\fI\%How to get reproducible benchmark results\fP\X'tty: link'\&. .UNINDENT .SS pyperformance virtual environment .sp To run benchmarks, pyperformance first creates a virtual environment. It installs requirements with fixed versions to get a reproducible environment. The system Python has unknown modules installed with unknown versions, and can have \fB\&.pth\fP files that run at Python startup, which can modify Python behaviour or at least slow down Python startup. .SS What is the goal of pyperformance .sp A benchmark is always written for a specific purpose. Depending on how the benchmark is written and how it is run, the result can be different and so have a different meaning.
.sp The pyperformance benchmark suite has multiple goals: .INDENT 0.0 .IP \(bu 2 Help detect performance regressions in a Python implementation .IP \(bu 2 Validate that an optimization change makes Python faster and doesn\(aqt introduce performance regressions, or only minor regressions .IP \(bu 2 Compare two implementations of Python, for example CPython and PyPy .IP \(bu 2 Showcase Python performance in a way that would ideally be representative of applications running in production .UNINDENT .SS Don\(aqt disable GC nor ASLR .sp The pyperf module and pyperformance benchmarks are designed to produce reproducible results, but not at the price of running benchmarks in a special mode which would not be used to run applications in production. For these reasons, the Python garbage collector, Python randomized hash function and system ASLR (Address Space Layout Randomization) are \fBnot disabled\fP\&. Benchmarks don\(aqt call \fBgc.collect()\fP either, since CPython implements it with \X'tty: link https://en.wikipedia.org/wiki/Tracing_garbage_collection#Stop-the-world_vs._incremental_vs._concurrent'\fI\%stop\-the\-world\fP\X'tty: link' and so applications avoid calling it to not kill performance. .SS Include outliers and spikes .sp Moreover, while the pyperf documentation explains how to reduce the random noise of the system and other applications, some benchmarks use the system and so can get different timings depending on the system workload, I/O performance, etc. Outliers and temporary spikes in results are \fBnot automatically removed\fP: values are summarized by computing the average (arithmetic mean) and standard deviation, which \(dqcontain\(dq these spikes, instead of using, for example, the median and the median absolute deviation, which would ignore outliers. This is a deliberate choice, since applications running in production are impacted by such temporary slowdowns, caused by things like garbage collection or JIT compilation. .SS Warmups and steady state .sp A borderline issue is benchmark \(dqwarmup\(dq. The first values of each worker process are always slower: 10% slower in the best case, and 1000% slower or more on PyPy. Right now (2017\-04\-14), pyperformance ignores the first values, considered as warmup, until a benchmark reaches its \(dqsteady state\(dq. The \(dqsteady state\(dq can include temporary spikes every 5 values (ex: caused by the garbage collector), and it can still imply further JIT compiler optimizations, but with a \(dqlow\(dq impact on the average performance. .sp To be clear, \(dqwarmup\(dq and \(dqsteady state\(dq are a work in progress and a very complex topic, especially on PyPy and its JIT compiler. .SS Notes .sp Tool for comparing the performance of two Python implementations. .sp pyperformance will run Student\(aqs two\-tailed T test on the benchmark results at the 95% confidence level to indicate whether the observed difference is statistically significant. .sp Omitting the \fB\-b\fP option will result in the default group of benchmarks being run; omitting \fB\-b\fP is the same as specifying \fI\-b default\fP\&. .sp To run every benchmark pyperformance knows about, use \fB\-b all\fP\&. To see a full list of all available benchmarks, use \fI\-\-help\fP\&. .sp Negative benchmark specifications are also supported: \fI\-b \-2to3\fP will run every benchmark in the default group except for 2to3 (this is the same as \fI\-b default,\-2to3\fP). \fI\-b all,\-django\fP will run all benchmarks except the Django templates benchmark.
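.sp For example (an illustrative sketch; \fBresults.json\fP is a placeholder filename): .INDENT 0.0 .INDENT 3.5 .sp .EX # Run the default group without the 2to3 benchmark pyperformance run \-b default,\-2to3 \-o results.json # Run every known benchmark pyperformance run \-b all \-o results.json .EE .UNINDENT .UNINDENT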
Negative groups (e.g., \fI\-b \-default\fP) are not supported. Positive benchmarks are parsed before the negative benchmarks are subtracted. .sp If \fB\-\-track_memory\fP is passed, pyperformance will continuously sample the benchmark\(aqs memory usage. This currently only works on Linux 2.6.16 and higher or Windows with PyWin32. Because \fB\-\-track_memory\fP introduces performance jitter while collecting memory measurements, only memory usage is reported in the final report. .SH BENCHMARKS .sp Also see \fI\%Custom Benchmarks\fP regarding how to create your own benchmarks or use someone else\(aqs. .SS Available Groups .sp Like individual benchmarks (see \(dqAvailable benchmarks\(dq below), benchmark groups are allowed after the \fI\-b\fP option. Use \fBpython3 \-m pyperformance list_groups\fP to list groups and their benchmarks. .sp Available benchmark groups: .INDENT 0.0 .IP \(bu 2 \fBall\fP: Group including all benchmarks .IP \(bu 2 \fBapps\fP: \(dqHigh\-level\(dq applicative benchmarks (2to3, Chameleon, Tornado HTTP) .IP \(bu 2 \fBdefault\fP: Group of benchmarks run by default by the \fBrun\fP command .IP \(bu 2 \fBmath\fP: Float and integers .IP \(bu 2 \fBregex\fP: Collection of regular expression benchmarks .IP \(bu 2 \fBserialize\fP: Benchmarks on \fBpickle\fP and \fBjson\fP modules .IP \(bu 2 \fBstartup\fP: Collection of microbenchmarks focused on Python interpreter start\-up time. .IP \(bu 2 \fBtemplate\fP: Templating libraries .UNINDENT .SS Available Benchmarks .sp In pyperformance 0.5.5, the following microbenchmarks were removed because they were too short, not representative of real applications, and too unstable. .INDENT 0.0 .IP \(bu 2 \fBcall_method_slots\fP .IP \(bu 2 \fBcall_method_unknown\fP .IP \(bu 2 \fBcall_method\fP .IP \(bu 2 \fBcall_simple\fP .IP \(bu 2 \fBpybench\fP .UNINDENT .SS 2to3 .sp Run the 2to3 tool on the \fBpyperformance/benchmarks/data/2to3/\fP directory: a copy of the 9 \fBdjango/core/*.py\fP files of Django 1.1.4. .sp Run the \fBpython \-m lib2to3 \-f all \fP command where \fBpython\fP is \fBsys.executable\fP\&. So the test does not only measure the performance of Python itself, but also that of the \fBlib2to3\fP module, which can change depending on the Python version. .sp \fBNOTE:\fP .INDENT 0.0 .INDENT 3.5 Files are called \fB\&.py.txt\fP instead of \fB\&.py\fP to not run PEP 8 checks on them, and more generally to not modify them. .UNINDENT .UNINDENT .SS async_tree .sp Async workload benchmark, which calls \fBasyncio.gather()\fP on a tree (6 levels deep, 6 branches per level) with the leaf nodes simulating some [potentially] async work (depending on the benchmark variant). Available variants: .INDENT 0.0 .IP \(bu 2 \fBasync_tree\fP: no actual async work at any leaf node. .IP \(bu 2 \fBasync_tree_io\fP: all leaf nodes simulate async IO workload (async sleep 50ms). .IP \(bu 2 \fBasync_tree_memoization\fP: all leaf nodes simulate async IO workload with 90% of the data memoized. .IP \(bu 2 \fBasync_tree_cpu_io_mixed\fP: half of the leaf nodes simulate CPU\-bound workload (\fBmath.factorial(500)\fP) and the other half simulate the same workload as the \fBasync_tree_memoization\fP variant. .UNINDENT .sp These benchmarks also have an \(dqeager\(dq flavor that uses the asyncio eager task factory, if available. .SS chameleon .sp Render a template using the \fBchameleon\fP module to create an HTML table of 500 rows and 10 columns.
.sp See the \fBchameleon.PageTemplate\fP class. .SS chaos .sp Create chaosgame\-like fractals. Command line options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-thickness THICKNESS Thickness (default: 0.25) \-\-width WIDTH Image width (default: 256) \-\-height HEIGHT Image height (default: 256) \-\-iterations ITERATIONS Number of iterations (default: 5000) \-\-filename FILENAME.PPM Output filename of the PPM picture \-\-rng\-seed RNG_SEED Random number generator seed (default: 1234) .EE .UNINDENT .UNINDENT .sp When the \fB\-\-filename\fP option is used, the timing includes the time to create the PPM file. .sp Copyright (C) 2005 Carl Friedrich Bolz [image: Chaos game, bm_chaos benchmark] [image] .sp Image generated by bm_chaos (took 3 sec on CPython 3.5) with the command: .INDENT 0.0 .INDENT 3.5 .sp .EX python3 pyperformance/benchmarks/bm_chaos.py \-\-worker \-l1 \-w0 \-n1 \-\-filename chaos.ppm \-\-width=512 \-\-height=512 \-\-iterations 50000 .EE .UNINDENT .UNINDENT .SS crypto_pyaes .sp Benchmark a pure\-Python implementation of the AES block\-cipher in CTR mode using the \fBpyaes\fP module. .sp The benchmark is slower on CPython 3 compared to CPython 2.7, because CPython 3 no longer has a \(dqsmall int\(dq type: the CPython 3 \fBint\fP type now always has an arbitrary size, like the CPython 2.7 \fBlong\fP type. .sp See \X'tty: link https://github.com/ricmoo/pyaes'\fI\%pyaes\fP\X'tty: link': A pure\-Python implementation of the AES block cipher algorithm and the common modes of operation (CBC, CFB, CTR, ECB and OFB). .SS deepcopy .sp Benchmark the Python \fIcopy.deepcopy\fP function, applied to a nested dictionary and a dataclass. .SS deltablue .sp DeltaBlue benchmark .sp Ported for the PyPy project. Contributed by Daniel Lindsley .sp This implementation of the DeltaBlue benchmark was directly ported from \X'tty: link https://github.com/v8/v8/blob/master/benchmarks/deltablue.js'\fI\%V8\(aqs source code\fP\X'tty: link', which was in turn derived from the Smalltalk implementation by John Maloney and Mario Wolczko. The original JavaScript implementation was licensed under the GPL. .sp It\(aqs been updated in places to be more idiomatic to Python (for loops over collections, a couple of magic methods, \fBOrderedCollection\fP being a list & things altering those collections changed to the builtin methods) but largely retains the layout & logic from the original. (Ugh.) .SS django_template .sp Use the Django template system to build a 150x150\-cell HTML table. .sp Use \fBContext\fP and \fBTemplate\fP classes of the \fBdjango.template\fP module. .SS dulwich_log .sp Iterate over the commits of the asyncio Git repository using the Dulwich module. It uses the \fBpyperformance/benchmarks/data/asyncio.git/\fP repository. .sp Pseudo\-code of the benchmark: .INDENT 0.0 .INDENT 3.5 .sp .EX repo = dulwich.repo.Repo(repo_path) head = repo.head() for entry in repo.get_walker(head): pass .EE .UNINDENT .UNINDENT .sp See the \X'tty: link https://www.dulwich.io/'\fI\%Dulwich project\fP\X'tty: link'\&. .SS docutils .sp Use \X'tty: link https://docutils.sourceforge.io/'\fI\%Docutils\fP\X'tty: link' to convert Docutils\(aq documentation to HTML. Representative of building a medium\-sized documentation set. .SS fannkuch .sp The Computer Language Benchmarks Game: \X'tty: link http://benchmarksgame.alioth.debian.org/'\fI\%http://benchmarksgame.alioth.debian.org/\fP\X'tty: link' .sp Contributed by Sokolov Yura, modified by Tupteq.
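.sp Like \fBbm_chaos\fP above, individual benchmark scripts can be run directly with pyperf worker options (a sketch, assuming a pyperformance source checkout): .INDENT 0.0 .INDENT 3.5 .sp .EX python3 pyperformance/benchmarks/bm_fannkuch.py \-\-worker \-l1 \-w0 \-n1 .EE .UNINDENT .UNINDENT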
.SS float .sp Artificial, floating point\-heavy benchmark originally used by Factor. .sp Create 100,000 point objects that compute \fBmath.cos()\fP, \fBmath.sin()\fP and \fBmath.sqrt()\fP\&. .sp Changed in version 0.5.5: Use \fB__slots__\fP on the Point class to focus the benchmark on float rather than testing performance of class attributes. .SS genshi .sp Render a template using Genshi (\fBgenshi.template\fP module): .INDENT 0.0 .IP \(bu 2 \fBgenshi_text\fP: Render an HTML template using the \fBNewTextTemplate\fP class .IP \(bu 2 \fBgenshi_xml\fP: Render an XML template using the \fBMarkupTemplate\fP class .UNINDENT .sp See the \X'tty: link http://pythonhosted.org/Genshi/'\fI\%Genshi project\fP\X'tty: link'\&. .SS go .sp Artificial intelligence playing the Go board game. Use \X'tty: link https://en.wikipedia.org/wiki/Zobrist_hashing'\fI\%Zobrist hashing\fP\X'tty: link'\&. .SS hexiom .sp Solver of the Hexiom board game (level 25 by default). Command line option: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-level {2,10,20,25,30,36} Hexiom board level (default: 25) .EE .UNINDENT .UNINDENT .SS hg_startup .sp Get Mercurial\(aqs help screen. .sp Measure the performance of the \fBpython path/to/hg help\fP command using \fBpyperf.Runner.bench_command()\fP, where \fBpython\fP is \fBsys.executable\fP and \fBpath/to/hg\fP is the Mercurial program installed in a virtual environment. .sp The \fBbench_command()\fP function redirects stdout and stderr to \fB/dev/null\fP\&. .sp See the \X'tty: link https://www.mercurial-scm.org/'\fI\%Mercurial project\fP\X'tty: link'\&. .SS html5lib .sp Parse the \fBpyperformance/benchmarks/data/w3_tr_html5.html\fP HTML file (132 KB) using \fBhtml5lib\fP\&. The file is the HTML 5 specification, but truncated to parse the file in less than 1 second (around 250 ms). .sp On CPython, after 3 warmups, the benchmark enters a cycle of 5 values: every 5th value is 10% slower. Plot of 1 run of 50 values (the warmup is not rendered): [image: html5lib values] [image] .sp See the \X'tty: link https://html5lib.readthedocs.io/'\fI\%html5lib project\fP\X'tty: link'\&. .SS json_dumps, json_loads .sp Benchmark \fBdumps()\fP and \fBloads()\fP functions of the \fBjson\fP module. .sp \fBbm_json_dumps.py\fP command line option: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-cases CASES Comma separated list of cases. Available cases: EMPTY, SIMPLE, NESTED, HUGE. By default, run all cases. .EE .UNINDENT .UNINDENT .SS logging .sp Benchmarks on the \fBlogging\fP module: .INDENT 0.0 .IP \(bu 2 \fBlogging_format\fP: Benchmark \fBlogger.warn(fmt, str)\fP .IP \(bu 2 \fBlogging_simple\fP: Benchmark \fBlogger.warn(msg)\fP .IP \(bu 2 \fBlogging_silent\fP: Benchmark \fBlogger.debug(msg)\fP when the log is ignored .UNINDENT .sp Script command line option: .INDENT 0.0 .INDENT 3.5 .sp .EX format silent simple .EE .UNINDENT .UNINDENT .sp See the \X'tty: link https://docs.python.org/dev/library/logging.html'\fI\%logging module\fP\X'tty: link'\&. .SS mako .sp Use the Mako template system to build a 150x150\-cell HTML table. Includes: .INDENT 0.0 .IP \(bu 2 two levels of template inheritance .IP \(bu 2 HTML escaping, XML escaping, URL escaping, whitespace trimming .IP \(bu 2 function definitions and calls .IP \(bu 2 for loops .UNINDENT .sp See the \X'tty: link http://docs.makotemplates.org/'\fI\%Mako project\fP\X'tty: link'\&. .SS mdp .sp Battle with damages and topological sorting of nodes in a graph. .sp See \X'tty: link https://en.wikipedia.org/wiki/Topological_sorting'\fI\%Topological sorting\fP\X'tty: link'\&.
.SS meteor_contest .sp Solver for the Meteor Puzzle board. .sp Meteor Puzzle board: \X'tty: link http://benchmarksgame.alioth.debian.org/u32/meteor-description.html#meteor'\fI\%http://benchmarksgame.alioth.debian.org/u32/meteor\-description.html#meteor\fP\X'tty: link' .sp The Computer Language Benchmarks Game: \X'tty: link http://benchmarksgame.alioth.debian.org/'\fI\%http://benchmarksgame.alioth.debian.org/\fP\X'tty: link' .sp Contributed by Daniel Nanz, 2008\-08\-21. .SS nbody .sp N\-body benchmark from the Computer Language Benchmarks Game. Microbenchmark on floating point operations. .sp This is intended to support Unladen Swallow\(aqs perf.py. Accordingly, it has been modified from the Shootout version: .INDENT 0.0 .IP \(bu 2 Accept standard Unladen Swallow benchmark options. .IP \(bu 2 Run report_energy()/advance() in a loop. .IP \(bu 2 Reimplement itertools.combinations() to work with older Python versions. .UNINDENT .sp Pulled from: \X'tty: link http://benchmarksgame.alioth.debian.org/u64q/program.php?test=nbody&lang=python3&id=1'\fI\%http://benchmarksgame.alioth.debian.org/u64q/program.php?test=nbody&lang=python3&id=1\fP\X'tty: link' .sp Contributed by Kevin Carson. Modified by Tupteq, Fredrik Johansson, and Daniel Nanz. .SS python_startup, python_startup_nosite .INDENT 0.0 .IP \(bu 2 \fBpython_startup\fP: Measure the Python startup time, run \fBpython \-c pass\fP where \fBpython\fP is \fBsys.executable\fP .IP \(bu 2 \fBpython_startup_nosite\fP: Measure the Python startup time without importing the \fBsite\fP module, run \fBpython \-S \-c pass\fP where \fBpython\fP is \fBsys.executable\fP .UNINDENT .sp Run the benchmark with \fBpyperf.Runner.bench_command()\fP\&. .SS nqueens .sp Simple, brute\-force N\-Queens solver. .sp See \X'tty: link https://en.wikipedia.org/wiki/Eight_queens_puzzle'\fI\%Eight queens puzzle\fP\X'tty: link'\&. .SS pathlib .sp Test the performance of operations of the \fBpathlib\fP module of the standard library. .sp This benchmark stresses the creation of small objects, globbing, and system calls. .sp See the \X'tty: link https://docs.python.org/dev/library/pathlib.html'\fI\%documentation of the pathlib module\fP\X'tty: link'\&. .SS pickle .sp pickle benchmarks (serialize): .INDENT 0.0 .IP \(bu 2 \fBpickle\fP: use the cPickle module to pickle a variety of datasets. .IP \(bu 2 \fBpickle_dict\fP: microbenchmark; use the cPickle module to pickle a lot of dicts. .IP \(bu 2 \fBpickle_list\fP: microbenchmark; use the cPickle module to pickle a lot of lists. .IP \(bu 2 \fBpickle_pure_python\fP: use the pure\-Python pickle module to pickle a variety of datasets. .UNINDENT .sp unpickle benchmarks (deserialize): .INDENT 0.0 .IP \(bu 2 \fBunpickle\fP: use the cPickle module to unpickle a variety of datasets. .IP \(bu 2 \fBunpickle_list\fP .IP \(bu 2 \fBunpickle_pure_python\fP: use the pure\-Python pickle module to unpickle a variety of datasets. .UNINDENT .SS pidigits .sp Calculating 2,000 digits of π. This benchmark stresses big integer arithmetic. .sp Command line option: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-digits DIGITS Number of computed pi digits (default: 2000) .EE .UNINDENT .UNINDENT .sp Adapted from code on: \X'tty: link http://benchmarksgame.alioth.debian.org/'\fI\%http://benchmarksgame.alioth.debian.org/\fP\X'tty: link' .SS pyflate .sp Benchmark of a pure\-Python bzip2 decompressor: decompress the \fBpyperformance/benchmarks/data/interpreter.tar.bz2\fP file in memory.
.sp Copyright 2006\-\-2007\-01\-21 Paul Sladen: \X'tty: link http://www.paul.sladen.org/projects/compression/'\fI\%http://www.paul.sladen.org/projects/compression/\fP\X'tty: link' .sp You may use and distribute this code under any DFSG\-compatible license (e.g. BSD, GNU GPLv2). .sp Stand\-alone pure\-Python DEFLATE (gzip) and bzip2 decoder/decompressor. This is probably most useful for research purposes/index building; there is certainly some room for improvement in the Huffman bit\-matcher. .sp With the as\-written implementation, there was a known bug in BWT decoding to do with repeated strings. This has been worked around; see \(aqbwt_reverse()\(aq. Correct output is produced in all test cases but ideally the problem would be found... .SS raytrace .sp Simple raytracer. .sp Command line options: .INDENT 0.0 .INDENT 3.5 .sp .EX \-\-width WIDTH Image width (default: 100) \-\-height HEIGHT Image height (default: 100) \-\-filename FILENAME.PPM Output filename of the PPM picture .EE .UNINDENT .UNINDENT .sp This file contains definitions for a simple raytracer. Copyright Callum and Tony Garnock\-Jones, 2008. .sp This file may be freely redistributed under the MIT license, \X'tty: link http://www.opensource.org/licenses/mit-license.php'\fI\%http://www.opensource.org/licenses/mit\-license.php\fP\X'tty: link' .sp From \X'tty: link https://leastfixedpoint.com/tonyg/kcbbs/lshift_archive/toy-raytracer-in-python-20081029.html'\fI\%https://leastfixedpoint.com/tonyg/kcbbs/lshift_archive/toy\-raytracer\-in\-python\-20081029.html\fP\X'tty: link' [image: Pure Python raytracer] [image] .sp Image generated by the command (took 68.4 sec on CPython 3.5): .INDENT 0.0 .INDENT 3.5 .sp .EX python3 pyperformance/benchmarks/bm_raytrace.py \-\-worker \-\-filename=raytrace.ppm \-l1 \-w0 \-n1 \-v \-\-width=800 \-\-height=600 .EE .UNINDENT .UNINDENT .SS regex_compile .sp Stress the performance of Python\(aqs regex compiler, rather than the regex execution speed. .sp Benchmark how quickly Python\(aqs regex implementation can compile regexes. .sp We bring in all the regexes used by the other regex benchmarks, capture them by stubbing out the re module, then compile those regexes repeatedly. We muck with the re module\(aqs caching to force it to recompile every regex we give it. .SS regex_dna .sp regex DNA benchmark using \(dqfasta\(dq to generate the test case. .sp The Computer Language Benchmarks Game: \X'tty: link http://benchmarksgame.alioth.debian.org/'\fI\%http://benchmarksgame.alioth.debian.org/\fP\X'tty: link' .sp regex\-dna Python 3 #5 program: contributed by Dominique Wahli, converted with 2to3, modified by Justin Peel. .sp fasta Python 3 #3 program: modified by Ian Osgood, modified again by Heinrich Acker, modified by Justin Peel, modified by Christopher Sean Forgeron. .SS regex_effbot .sp Some of the original benchmarks used to tune mainline Python\(aqs current regex engine. .SS regex_v8 .sp Python port of V8\(aqs regex benchmark. .sp Automatically generated on 2009\-01\-30. .sp This benchmark is generated by loading 50 of the most popular pages on the web and logging all regexp operations performed. Each operation is given a weight that is calculated from an estimate of the popularity of the pages where it occurs and the number of times it is executed while loading each page. Finally the literal letters in the data are encoded using ROT13 in a way that does not affect how the regexps match their input. .sp Ported to Python for Unladen Swallow.
The original JS version can be found at \X'tty: link https://github.com/v8/v8/blob/master/benchmarks/regexp.js'\fI\%https://github.com/v8/v8/blob/master/benchmarks/regexp.js\fP\X'tty: link', r1243. .SS richards .sp The classic Python Richards benchmark. .sp Based on a Java version. .sp Based on the original version written in BCPL by Dr Martin Richards in 1981 at Cambridge University Computer Laboratory, England, and a C++ version derived from a Smalltalk version written by L Peter Deutsch. .sp Java version: Copyright (C) 1995 Sun Microsystems, Inc. Translation from C++: Mario Wolczko. Outer loop added by Alex Jacoby. .SS scimark .INDENT 0.0 .IP \(bu 2 \fBscimark_sor\fP: \X'tty: link https://en.wikipedia.org/wiki/Successive_over-relaxation'\fI\%Successive over\-relaxation (SOR)\fP\X'tty: link' benchmark .IP \(bu 2 \fBscimark_sparse_mat_mult\fP: \X'tty: link https://en.wikipedia.org/wiki/Sparse_matrix'\fI\%sparse matrix\fP\X'tty: link' \X'tty: link https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm'\fI\%multiplication\fP\X'tty: link' benchmark .IP \(bu 2 \fBscimark_monte_carlo\fP: benchmark on the \X'tty: link https://en.wikipedia.org/wiki/Monte_Carlo_algorithm'\fI\%Monte Carlo algorithm\fP\X'tty: link' to compute the area of a disc .IP \(bu 2 \fBscimark_lu\fP: \X'tty: link https://en.wikipedia.org/wiki/LU_decomposition'\fI\%LU decomposition\fP\X'tty: link' benchmark .IP \(bu 2 \fBscimark_fft\fP: \X'tty: link https://en.wikipedia.org/wiki/Fast_Fourier_transform'\fI\%Fast Fourier transform (FFT)\fP\X'tty: link' benchmark .UNINDENT .SS spectral_norm .sp MathWorld: \(dqHundred\-Dollar, Hundred\-Digit Challenge Problems\(dq, Challenge #3. \X'tty: link http://mathworld.wolfram.com/Hundred-DollarHundred-DigitChallengeProblems.html'\fI\%http://mathworld.wolfram.com/Hundred\-DollarHundred\-DigitChallengeProblems.html\fP\X'tty: link' .sp The Computer Language Benchmarks Game \X'tty: link http://benchmarksgame.alioth.debian.org/u64q/spectralnorm-description.html#spectralnorm'\fI\%http://benchmarksgame.alioth.debian.org/u64q/spectralnorm\-description.html#spectralnorm\fP\X'tty: link' .sp Contributed by Sebastien Loisel. Fixed by Isaac Gouy. Sped up by Josh Goldfoot. Dirtily sped up by Simon Descarpentries. Concurrency by Jason Stitt. .SS sqlalchemy_declarative, sqlalchemy_imperative .INDENT 0.0 .IP \(bu 2 \fBsqlalchemy_declarative\fP: SQLAlchemy Declarative benchmark using SQLite .IP \(bu 2 \fBsqlalchemy_imperative\fP: SQLAlchemy Imperative benchmark using SQLite .UNINDENT .sp See the \X'tty: link https://www.sqlalchemy.org/'\fI\%SQLAlchemy project\fP\X'tty: link'\&. .SS sqlite_synth .sp Benchmark a Python aggregate function for SQLite. .sp The goal of the benchmark (written for PyPy) is to test CFFI performance when going back and forth between SQLite and Python a lot. Therefore the queries themselves are really simple. .sp See the \X'tty: link https://www.sqlite.org/'\fI\%SQLite project\fP\X'tty: link' and the \X'tty: link https://docs.python.org/dev/library/sqlite3.html'\fI\%Python sqlite3 module (stdlib)\fP\X'tty: link'\&.
.SS sympy .sp Benchmarks on the \fBsympy\fP module: .INDENT 0.0 .IP \(bu 2 \fBsympy_expand\fP: Benchmark \fBsympy.expand()\fP .IP \(bu 2 \fBsympy_integrate\fP: Benchmark \fBsympy.integrate()\fP .IP \(bu 2 \fBsympy_str\fP: Benchmark \fBstr(sympy.expand())\fP .IP \(bu 2 \fBsympy_sum\fP: Benchmark \fBsympy.summation()\fP .UNINDENT .sp On CPython, some \fBsympy_sum\fP values are 5%\-10% slower: .INDENT 0.0 .INDENT 3.5 .sp .EX $ python3 \-m pyperf dump sympy_sum.json Run 1: 1 warmup, 50 values, 1 loop \- warmup 1: 404 ms (+63%) \- value 1: 244 ms \- value 2: 245 ms \- value 3: 258 ms <\-\-\-\- \- value 4: 245 ms \- value 5: 245 ms \- value 6: 279 ms (+12%) <\-\-\-\- \- value 7: 246 ms \- value 8: 244 ms \- value 9: 245 ms \- value 10: 255 ms <\-\-\-\- \- value 11: 245 ms \- value 12: 245 ms \- value 13: 256 ms <\-\-\-\- \- value 14: 248 ms \- value 15: 245 ms \- value 16: 245 ms \&... .EE .UNINDENT .UNINDENT .sp Plot of 1 run of 50 values (the warmup is not rendered): [image: sympy_sum values] [image] .sp See the \X'tty: link http://www.sympy.org/'\fI\%sympy project\fP\X'tty: link'\&. .SS telco .sp Telco Benchmark for measuring the performance of decimal calculations: .INDENT 0.0 .IP \(bu 2 \X'tty: link http://speleotrove.com/decimal/telco.html'\fI\%http://speleotrove.com/decimal/telco.html\fP\X'tty: link' .IP \(bu 2 \X'tty: link http://speleotrove.com/decimal/telcoSpec.html'\fI\%http://speleotrove.com/decimal/telcoSpec.html\fP\X'tty: link' .IP \(bu 2 A call type indicator, \fBc\fP, is set from the bottom (least significant) bit of the duration (hence \fBc\fP is 0 or 1). .IP \(bu 2 A rate, \fBr\fP, is determined from the call type. Those calls with \fBc=0\fP have a low \fBr\fP: \fB0.0013\fP; the remainder (‘distance calls’) have a ‘premium’ \fBr\fP: \fB0.00894\fP\&. (The rates are, very roughly, in Euros or dollars per second.) .IP \(bu 2 A price, \fBp\fP, for the call is then calculated (\fBp=r*n\fP, where \fBn\fP is the call duration). This is rounded to exactly 2 fractional digits using round\-half\-even (Banker’s round to nearest). .IP \(bu 2 A basic tax, \fBb\fP, is calculated: \fBb=p*0.0675\fP (6.75%). This is truncated to exactly 2 fractional digits (round\-down), and the total basic tax variable is then incremented (\fBsumB=sumB+b\fP). .IP \(bu 2 For distance calls: a distance tax, \fBd\fP, is calculated: \fBd=p*0.0341\fP (3.41%). This is truncated to exactly 2 fractional digits (round\-down), and then the total distance tax variable is incremented (\fBsumD=sumD+d\fP). .IP \(bu 2 The total price, \fBt\fP, is calculated (\fBt=p+b\fP, and, if a distance call, \fBt=t+d\fP). .IP \(bu 2 The total prices variable is incremented (\fBsumT=sumT+t\fP). .IP \(bu 2 The total price, \fBt\fP, is converted to a string, \fBs\fP\&. .UNINDENT .sp The Python benchmark is implemented with the \fBdecimal\fP module. .sp See the \X'tty: link https://docs.python.org/dev/library/decimal.html'\fI\%Python decimal module (stdlib)\fP\X'tty: link'\&. .SS tornado_http .sp Benchmark the HTTP server of the \fBtornado\fP module. .sp See the \X'tty: link http://www.tornadoweb.org/'\fI\%Tornado project\fP\X'tty: link'\&. .SS unpack_sequence .sp Microbenchmark for unpacking lists and tuples. .sp Pseudo\-code: .INDENT 0.0 .INDENT 3.5 .sp .EX a, b, c, d, e, f, g, h, i, j = to_unpack .EE .UNINDENT .UNINDENT .sp where \fBto_unpack\fP is \fBtuple(range(10))\fP or \fBlist(range(10))\fP\&.
.SS xml_etree .sp Benchmark the \fBElementTree\fP API of the \fBxml.etree\fP module: .INDENT 0.0 .IP \(bu 2 \fBxml_etree_generate\fP: Create an XML document .IP \(bu 2 \fBxml_etree_iterparse\fP: Benchmark \fBetree.iterparse()\fP .IP \(bu 2 \fBxml_etree_parse\fP: Benchmark \fBetree.parse()\fP .IP \(bu 2 \fBxml_etree_process\fP: Process an XML document .UNINDENT .sp See the \X'tty: link https://docs.python.org/dev/library/xml.etree.elementtree.html'\fI\%Python xml.etree.ElementTree module (stdlib)\fP\X'tty: link'\&. .SH CUSTOM BENCHMARKS .sp pyperformance includes its own set of benchmarks (see \fI\%Benchmarks\fP). However, it also supports using custom benchmarks. .SS Using Custom Benchmarks .sp To use custom benchmarks, you will need to use the \fB\-\-manifest\fP CLI option and provide the path to the manifest file describing those benchmarks. .SS The pyperformance File Formats .sp \fBpyperformance\fP uses two file formats to identify benchmarks: .INDENT 0.0 .IP \(bu 2 manifest \- a set of benchmarks .IP \(bu 2 metadata \- a single benchmark .UNINDENT .sp For each benchmark, there are two required files and several optional ones. Those files are expected to be in a specific directory structure (unless customized in the metadata). .sp The structure (see below) is such that it\(aqs easy to maintain a benchmark (or set of benchmarks) on GitHub and distribute it on PyPI. It also simplifies publishing a Python project\(aqs benchmarks. The alternative is pointing people at a repo. .sp Benchmarks can inherit metadata from other metadata files. This is useful for keeping common metadata for a set of benchmarks (e.g. \(dqversion\(dq) in one file. Likewise, benchmarks for a Python project can inherit metadata from the project\(aqs pyproject.toml. .sp Sometimes a benchmark will have one or more variants that run using the same script. Variants like this are supported by \fBpyperformance\fP without requiring much extra effort. .SS Benchmark Directory Structure .sp Normally a benchmark is structured like this: .INDENT 0.0 .INDENT 3.5 .sp .EX bm_NAME/ data/ # if needed requirements.txt # lock file, if any pyproject.toml run_benchmark.py .EE .UNINDENT .UNINDENT .sp (Note the \(dqbm_\(dq prefix on the directory name.) .sp \(dqpyproject.toml\(dq holds the metadata. \(dqrun_benchmark.py\(dq holds the actual benchmark code. Both are necessary. .sp \fBpyperformance\fP treats the metadata file as the fundamental source of information about a benchmark. A manifest for a set of benchmarks is effectively a mapping of names to metadata files. So a metadata file is essential. It can be located anywhere on disk. However, if it isn\(aqt located in the structure described above then the metadata must identify where to find the other files. .sp Other than that, only a benchmark script (e.g. \(dqrun_benchmark.py\(dq above) is required. All other files are optional. .sp When a benchmark has variants, each has its own metadata file next to the normal \(dqpyproject.toml\(dq, named \(dqbm_NAME.toml\(dq. (Note the \(dqbm_\(dq prefix.) The format of variant metadata files is exactly the same. \fBpyperformance\fP treats them the same, except that the sibling \(dqpyproject.toml\(dq is inherited by default. .SS Manifest Files .sp A manifest file identifies a set of benchmarks, as well as (optionally) how they should be grouped. \fBpyperformance\fP uses the manifest to determine which benchmarks are available to run (and thus which to run by default). 
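.sp For example, to run benchmarks from a custom manifest (an illustrative sketch; the manifest path and \fBbench1\fP are placeholders): .INDENT 0.0 .INDENT 3.5 .sp .EX pyperformance run \-\-manifest ./benchmarks/MANIFEST \-b bench1 \-o results.json .EE .UNINDENT .UNINDENT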
.sp A manifest normally looks like this: .INDENT 0.0 .INDENT 3.5 .sp .EX [benchmarks] name metafile bench1 somedir/bm_bench1/pyproject.toml bench2 somedir/pyproject.toml bench3 ../anotherdir .EE .UNINDENT .UNINDENT .sp The \(dqbenchmarks\(dq section is a table with rows of tab\-separated\-values. The \(dqname\(dq value is how \fBpyperformance\fP will identify the benchmark. The \(dqmetafile\(dq value is where \fBpyperformance\fP will look for the benchmark\(aqs metadata. If a metafile is a directory then it looks for \(dqpyproject.toml\(dq in that directory. .SS Benchmark Groups .sp The other sections in the manifest file relate to grouping: .INDENT 0.0 .INDENT 3.5 .sp .EX [benchmarks] name metafile bench1 somedir/bm_bench1 bench2 somedir/bm_bench2 bench3 anotherdir/mybench.toml [groups] tag1 tag2 [group default] bench2 bench3 [group tricky] bench2 .EE .UNINDENT .UNINDENT .sp The \(dqgroups\(dq section specifies available groups that may be identified by benchmark tags (see about tags in the metadata section below). Any other group sections in the manifest are automatically added to the list of available groups. .sp If no \(dqdefault\(dq group is specified then one is automatically added with all benchmarks from the \(dqbenchmarks\(dq section in it. If there is no \(dqgroups\(dq section and no individual group sections (other than \(dqdefault\(dq) then the set of all tags of the known benchmarks is treated as \(dqgroups\(dq. A group named \(dqall\(dq is also automatically added, which has all known benchmarks in it. .sp Benchmarks can be excluded from a group by using a \fB\-\fP (minus) prefix. Any benchmark already in the list (at that point) that matches will be dropped from the list. If the first entry in the section is an exclusion then all known benchmarks are first added to the list before the exclusion is applied. .sp For example: .INDENT 0.0 .INDENT 3.5 .sp .EX [benchmarks] name metafile bench1 somedir/bm_bench1 bench2 somedir/bm_bench2 bench3 anotherdir/mybench.toml [group default] \-bench1 .EE .UNINDENT .UNINDENT .sp This means by default only \(dqbench2\(dq and \(dqbench3\(dq are run. .SS Merging Manifests .sp To combine manifests, use the \fB[includes]\fP section in the manifest: .INDENT 0.0 .INDENT 3.5 .sp .EX [includes] project1/benchmarks/MANIFEST project2/benchmarks/MANIFEST .EE .UNINDENT .UNINDENT .sp Note that \fB<default>\fP is the same as including the manifest file for the default pyperformance benchmarks. .SS A Local Benchmark Suite .sp Often a project will have more than one benchmark that it will treat as a suite. \fBpyperformance\fP handles this without any extra work. .sp In the directory holding the manifest file, put all the benchmarks. Then put \fB<local>\fP in the \(dqmetafile\(dq column, like this: .INDENT 0.0 .INDENT 3.5 .sp .EX [benchmarks] name metafile bench1 <local> bench2 <local> bench3 <local> bench4 <local> bench5 <local> .EE .UNINDENT .UNINDENT .sp It will look for \fBDIR/bm_NAME/pyproject.toml\fP\&. .sp If there are also variants, identify the main benchmark in the \(dqmetafile\(dq value, like this: .INDENT 0.0 .INDENT 3.5 .sp .EX [benchmarks] name metafile bench1 <local> bench2 <local> bench3 <local> variant1 <local:bench1> variant2 <local:bench1> .EE .UNINDENT .UNINDENT .sp \fBpyperformance\fP will look for \fBDIR/bm_BASE/bm_NAME.toml\fP, where \(dqBASE\(dq is the part after \(dqlocal:\(dq. .SS A Project\(aqs Benchmark Suite .sp A Python project can identify its benchmark suite by putting the path to the manifest file in the project\(aqs top\-level pyproject.toml.
Additional manifests can be identified as well: .INDENT 0.0 .INDENT 3.5 .sp .EX [tool.pyperformance] manifest = \(dq...\(dq manifests = [\(dq...\(dq, \(dq...\(dq] .EE .UNINDENT .UNINDENT .sp (Reminder: that is the pyproject.toml, not the manifest file.) .SS Benchmark Metadata Files .sp A benchmark\(aqs metadata file (usually pyproject.toml) follows the format specified in \X'tty: link https://www.python.org/dev/peps/pep-0621'\fI\%PEP 621\fP\X'tty: link' and \X'tty: link https://www.python.org/dev/peps/pep-0518'\fI\%PEP 518\fP\X'tty: link'\&. So there are two supported sections in the file: \(dqproject\(dq and \(dqtool.pyperformance\(dq. .sp A typical metadata file will look something like this: .INDENT 0.0 .INDENT 3.5 .sp .EX [project] version = \(dq0.9.1\(dq dependencies = [\(dqpyperf\(dq] dynamic = [\(dqname\(dq] [tool.pyperformance] name = \(dqmy_benchmark\(dq .EE .UNINDENT .UNINDENT .sp A highly detailed one might look like this: .INDENT 0.0 .INDENT 3.5 .sp .EX [project] name = \(dqpyperformance_bm_json_dumps\(dq version = \(dq0.9.1\(dq description = \(dqA benchmark for json.dumps()\(dq requires\-python = \(dq>=3.8\(dq dependencies = [\(dqpyperf\(dq] urls = {repository = \(dqhttps://github.com/python/pyperformance\(dq} dynamic = [\(dqversion\(dq] [tool.pyperformance] name = \(dqjson_dumps\(dq tags = \(dqserialize\(dq runscript = \(dqbench.py\(dq datadir = \(dq.data\-files/extras\(dq extra_opts = [\(dq\-\-special\(dq] .EE .UNINDENT .UNINDENT .SS Inheritance .sp For one benchmark to inherit from another (or from common metadata), the \(dqinherits\(dq field is available: .INDENT 0.0 .INDENT 3.5 .sp .EX [project] dependencies = [\(dqpyperf\(dq] dynamic = [\(dqname\(dq, \(dqversion\(dq] [tool.pyperformance] name = \(dqmy_benchmark\(dq inherits = \(dq../common.toml\(dq .EE .UNINDENT .UNINDENT .sp All values in either section of the inherited metadata are treated as defaults, on top of which the current metadata is applied. In the above example, for instance, a value for \(dqversion\(dq in common.toml would be used here. .sp If the \(dqinherits\(dq value is a directory (even for \(dq..\(dq) then \(dqbase.toml\(dq in that directory will be inherited. .sp For variants, the base pyproject.toml is the default value for \(dqinherits\(dq. .SS Inferred Values .sp In some situations, omitted values will be inferred from other available data (even for required fields). .INDENT 0.0 .IP \(bu 2 \fBproject.name\fP <= \fBtool.pyperformance.name\fP .IP \(bu 2 \fBproject.*\fP <= inherited metadata (except for \(dqname\(dq and \(dqdynamic\(dq) .IP \(bu 2 \fBtool.pyperformance.name\fP <= metadata filename .IP \(bu 2 \fBtool.pyperformance.*\fP <= inherited metadata (except for \(dqname\(dq and \(dqinherits\(dq) .UNINDENT .sp When the name is inferred from the filename for a regularly structured benchmark, the \(dqbm_\(dq prefix is removed from the benchmark\(aqs directory. If it is a variant that prefix is removed from the metadata filename, as well as the .toml suffix. .SS The \fB[project]\fP Section .TS box center; l|l|l|l|l|l. 
T{ field T} T{ type T} T{ R T} T{ T T} T{ B T} T{ D T} _ T{ project.name T} T{ str T} T{ X T} T{ X T} T{ T} T{ T} _ T{ project.version T} T{ ver T} T{ X T} T{ T} T{ X T} T{ X T} _ T{ project.dependencies T} T{ [str] T} T{ T} T{ T} T{ X T} T{ T} _ T{ project.dynamic T} T{ [str] T} T{ T} T{ T} T{ T} T{ T} .TE .sp \(dqR\(dq: required; \(dqT\(dq: inferred from the tool section; \(dqB\(dq: inferred from the inherited metadata; \(dqD\(dq: for default benchmarks, inferred from pyperformance .sp \(dqdynamic\(dq is required by PEP 621 for fields that will be filled in dynamically by the tool. This is especially important for required fields. .sp All other PEP 621 fields are optional (e.g. \fBrequires\-python = \(dq>=3.8\(dq\fP, \fB{repository = \(dqhttps://github.com/...\(dq}\fP). .SS The \fB[tool.pyperformance]\fP Section .TS box center; l|l|l|l|l. T{ field T} T{ type T} T{ R T} T{ B T} T{ F T} _ T{ tool.name T} T{ str T} T{ X T} T{ T} T{ X T} _ T{ tool.tags T} T{ [str] T} T{ T} T{ X T} T{ T} _ T{ tool.extra_opts T} T{ [str] T} T{ T} T{ X T} T{ T} _ T{ tool.inherits T} T{ file T} T{ T} T{ T} T{ T} _ T{ tool.runscript T} T{ file T} T{ T} T{ X T} T{ T} _ T{ tool.datadir T} T{ file T} T{ T} T{ X T} T{ T} .TE .sp \(dqR\(dq: required; \(dqB\(dq: inferred from the inherited metadata; \(dqF\(dq: inferred from filename .INDENT 0.0 .IP \(bu 2 tags: optional list of names to group benchmarks .IP \(bu 2 extra_opts: optional list of args to pass to \fBtool.runscript\fP .IP \(bu 2 runscript: the benchmark script to use instead of run_benchmark.py. .UNINDENT .SH CPYTHON RESULTS, 2017 .sp This section lists benchmarks which became significantly faster in CPython, along with one notable slowdown and several longer\-term timelines. .SS Optimizations .SS 2016\-12\-14: speedup method calls .sp Optimization: \X'tty: link https://bugs.python.org/issue26110'\fI\%Speedup method calls 1.2x\fP\X'tty: link', \X'tty: link https://github.com/python/cpython/commit/f2392133eba777f05947a8996c507690b95379c3'\fI\%commit f2392133\fP\X'tty: link'\&. .TS box center; l|l|l. T{ Benchmark T} T{ 2016\-12\-01 (27580c1fb5e8) T} T{ 2017\-01\-01 (67e1aa0b58be) T} _ T{ call_method T} T{ 14.1 ms T} T{ 11.2 ms: 1.26x faster (\-21%) T} _ T{ call_method_slots T} T{ 13.9 ms T} T{ 11.1 ms: 1.25x faster (\-20%) T} _ T{ call_method_unknown T} T{ 16.0 ms T} T{ 14.3 ms: 1.12x faster (\-11%) T} .TE .SS 2016\-04\-22: pymalloc allocator .sp Optimization: \X'tty: link http://bugs.python.org/issue26249'\fI\%PyMem_Malloc() now uses the fast pymalloc allocator\fP\X'tty: link', \X'tty: link https://github.com/python/cpython/commit/f5c4b99034fae12ac2b9498dd12b5b3f352b90c8'\fI\%commit f5c4b990\fP\X'tty: link'\&. .sp Changes of at least 5%: .TS box center; l|l|l.
T{ Benchmark T} T{ 2016\-04\-21 (5439fc4901db) T} T{ 2016\-04\-22 (f5c4b99034fa) T} _ T{ unpickle_list T} T{ 10.4 us T} T{ 7.64 us: 1.36x faster (\-27%) T} _ T{ json_dumps T} T{ 28.0 ms T} T{ 25.2 ms: 1.11x faster (\-10%) T} _ T{ unpickle_pure_python T} T{ 741 us T} T{ 678 us: 1.09x faster (\-9%) T} _ T{ unpickle T} T{ 33.9 us T} T{ 31.3 us: 1.08x faster (\-8%) T} _ T{ meteor_contest T} T{ 197 ms T} T{ 183 ms: 1.08x faster (\-7%) T} _ T{ mako T} T{ 36.9 ms T} T{ 34.3 ms: 1.07x faster (\-7%) T} _ T{ pathlib T} T{ 41.0 ms T} T{ 38.4 ms: 1.07x faster (\-6%) T} _ T{ call_method_slots T} T{ 14.8 ms T} T{ 13.9 ms: 1.07x faster (\-6%) T} _ T{ telco T} T{ 19.5 ms T} T{ 18.3 ms: 1.07x faster (\-6%) T} _ T{ scimark_lu T} T{ 413 ms T} T{ 388 ms: 1.07x faster (\-6%) T} _ T{ nqueens T} T{ 221 ms T} T{ 207 ms: 1.07x faster (\-6%) T} _ T{ fannkuch T} T{ 937 ms T} T{ 882 ms: 1.06x faster (\-6%) T} _ T{ regex_compile T} T{ 319 ms T} T{ 301 ms: 1.06x faster (\-6%) T} _ T{ raytrace T} T{ 1.16 sec T} T{ 1.09 sec: 1.06x faster (\-5%) T} _ T{ pickle_pure_python T} T{ 1.11 ms T} T{ 1.05 ms: 1.05x faster (\-5%) T} _ T{ genshi_text T} T{ 70.1 ms T} T{ 66.6 ms: 1.05x faster (\-5%) T} .TE .SS 2015\-12\-07: Optimize ElementTree.iterparse(), xml_etree_iterparse .sp Optimization: \X'tty: link http://bugs.python.org/issue25638'\fI\%Issue #25638: Optimized ElementTree.iterparse(); it is now 2x faster\fP\X'tty: link', \X'tty: link https://github.com/python/cpython/commit/9ec5e25f26a490510bb5da5c26a276cd30a263a0'\fI\%commit 9ec5e25f2\fP\X'tty: link'\&. .TS box center; l|l|l. T{ Benchmark T} T{ 2015\-12\-01 (df144092a340) T} T{ 2016\-01\-01 (71db90356390) T} _ T{ xml_etree_iterparse T} T{ 423 ms T} T{ 206 ms: 2.05x faster (\-51%) T} .TE .SS 2015\-09\-19: PGO uses test suite, pidigits .sp Optimization: \X'tty: link http://bugs.python.org/issue24915'\fI\%Issue #24915: Add Clang support to PGO builds and use the test suite for profile data\fP\X'tty: link', \X'tty: link https://github.com/python/cpython/commit/7188a3efe07b9effdb760f3a96783f250214f0be'\fI\%commit 7188a3ef\fP\X'tty: link'\&. .sp Changes of at least 5%: .TS box center; l|l|l. T{ Benchmark T} T{ 2015\-09\-18 (4b363e270108) T} T{ 2015\-09\-18_22\-13 (7188a3efe07b) T} _ T{ pickle T} T{ 33.7 us T} T{ 26.4 us: 1.28x faster (\-22%) T} _ T{ pidigits T} T{ 332 ms T} T{ 286 ms: 1.16x faster (\-14%) T} _ T{ pickle_list T} T{ 9.90 us T} T{ 8.84 us: 1.12x faster (\-11%) T} _ T{ unpickle T} T{ 37.2 us T} T{ 33.3 us: 1.12x faster (\-11%) T} _ T{ unpickle_list T} T{ 11.1 us T} T{ 9.95 us: 1.11x faster (\-10%) T} _ T{ regex_dna T} T{ 330 ms T} T{ 297 ms: 1.11x faster (\-10%) T} _ T{ regex_effbot T} T{ 6.43 ms T} T{ 5.80 ms: 1.11x faster (\-10%) T} _ T{ pickle_dict T} T{ 69.3 us T} T{ 64.1 us: 1.08x faster (\-8%) T} _ T{ mako T} T{ 39.1 ms T} T{ 36.2 ms: 1.08x faster (\-7%) T} _ T{ call_simple T} T{ 12.2 ms T} T{ 11.6 ms: 1.05x faster (\-5%) T} _ T{ genshi_xml T} T{ 175 ms T} T{ 166 ms: 1.05x faster (\-5%) T} .TE .sp Changes of at least 5%; sadly, two benchmarks also became slower: .TS box center; l|l|l.
T{ Benchmark T} T{ 2015\-09\-18_14\-32\-master\-4b363e270108 T} T{ 2015\-09\-18_22\-13\-master\-7188a3efe07b T} _ T{ unpickle_pure_python T} T{ 776 us T} T{ 821 us: 1.06x slower (+6%) T} _ T{ regex_v8 T} T{ 49.5 ms T} T{ 52.6 ms: 1.06x slower (+6%) T} .TE .SS 2015\-05\-30: C implementation of collections.OrderedDict, html5lib .sp Optimization: \X'tty: link http://bugs.python.org/issue16991'\fI\%Issue #16991: Add a C implementation of collections.OrderedDict\fP\X'tty: link', \X'tty: link https://github.com/python/cpython/commit/96c6af9b207c188c52ac53ce87bb7f2dea3f328b'\fI\%commit 96c6af9b\fP\X'tty: link'\&. .TS box center; l|l|l. T{ Benchmark T} T{ 2015\-05\-02 (3b4d30a27bd6) T} T{ 2015\-06\-01 (41874c570cf3) T} _ T{ html5lib T} T{ 285 ms T} T{ 233 ms: 1.23x faster (\-19%) T} .TE .SS 2015\-05\-23: C implementation of functools.lru_cache(), sympy .sp Optimization: \X'tty: link http://bugs.python.org/issue14373'\fI\%Issue #14373: Added C implementation of functools.lru_cache()\fP\X'tty: link', \X'tty: link https://github.com/python/cpython/commit/1c858c352b8c11419f79f586334c49378726dba8'\fI\%commit 1c858c35\fP\X'tty: link'\&. .sp Changes of at least 5%: .TS box center; l|l|l. T{ Benchmark T} T{ 2015\-05\-23_19\-15\-master\-c70908558d8e T} T{ 2015\-05\-23_19\-42\-master\-1c858c352b8c T} _ T{ sympy_expand T} T{ 1.45 sec T} T{ 1.14 sec: 1.27x faster (\-21%) T} _ T{ sympy_sum T} T{ 308 ms T} T{ 247 ms: 1.25x faster (\-20%) T} _ T{ sympy_str T} T{ 621 ms T} T{ 500 ms: 1.24x faster (\-19%) T} _ T{ sympy_integrate T} T{ 54.2 ms T} T{ 45.7 ms: 1.19x faster (\-16%) T} _ T{ scimark_lu T} T{ 497 ms T} T{ 471 ms: 1.06x faster (\-5%) T} .TE .sp \fBpickle_dict\fP is seen as 1.06x slower, but since pickle doesn\(aqt use functools.lru_cache(), the change is ignored in the table. .SS Slowdown .SS 2016\-09\-11: regex_compile .sp Slowdown: \X'tty: link http://bugs.python.org/issue28082'\fI\%convert re flags to (much friendlier) IntFlag constants (issue #28082)\fP\X'tty: link', \X'tty: link https://github.com/python/cpython/commit/f93395bc5125c99539597bf134ca8bcf9707655b'\fI\%commit f93395bc\fP\X'tty: link'\&. .TS box center; l|l|l|l. T{ Benchmark T} T{ 2016\-04\-01 (6b6abd4cf10e) T} T{ 2016\-07\-01 (355048970b2a) T} T{ 2016\-10\-01 (78a111c7d867) T} _ T{ regex_compile T} T{ 339 ms T} T{ 309 ms: 1.10x faster (\-9%) T} T{ 383 ms: 1.13x slower (+13%) T} .TE .SS Timeline .SS April, 2016 \-> May, 2016 .sp 2016\-04\-01 .. 2016\-05\-01: .TS box center; l|l|l. T{ Benchmark T} T{ 2016\-04\-01 (dcfebb32e277) T} T{ 2016\-05\-01 (f1e2671fdf88) T} _ T{ nqueens T} T{ 255 ms T} T{ 207 ms: 1.23x faster (\-19%) T} _ T{ raytrace T} T{ 1.31 sec T} T{ 1.09 sec: 1.19x faster (\-16%) T} _ T{ float T} T{ 290 ms T} T{ 243 ms: 1.19x faster (\-16%) T} _ T{ chaos T} T{ 273 ms T} T{ 235 ms: 1.16x faster (\-14%) T} _ T{ hexiom T} T{ 21.0 ms T} T{ 18.6 ms: 1.13x faster (\-11%) T} _ T{ deltablue T} T{ 16.4 ms T} T{ 14.6 ms: 1.12x faster (\-11%) T} _ T{ go T} T{ 557 ms T} T{ 502 ms: 1.11x faster (\-10%) T} _ T{ nbody T} T{ 254 ms T} T{ 232 ms: 1.10x faster (\-9%) T} .TE .SS call_method .sp Timeline 2016\-04\-01 .. 2017\-01\-01: .TS box center; l|l|l|l|l. 
T{ Benchmark T} T{ 2016\-04\-01 (6b6abd4cf10e) T} T{ 2016\-07\-01 (355048970b2a) T} T{ 2016\-10\-01 (78a111c7d867) T} T{ 2017\-01\-01 (67e1aa0b58be) T} _ T{ call_method T} T{ 15.8 ms T} T{ 14.9 ms: 1.06x faster (\-6%) T} T{ 14.1 ms: 1.13x faster (\-11%) T} T{ 11.2 ms: 1.42x faster (\-29%) T} _ T{ call_method_slots T} T{ 15.7 ms T} T{ 15.2 ms: 1.03x faster (\-3%) T} T{ 14.0 ms: 1.13x faster (\-11%) T} T{ 11.1 ms: 1.42x faster (\-29%) T} _ T{ call_method_unknown T} T{ 17.7 ms T} T{ 15.9 ms: 1.11x faster (\-10%) T} T{ 15.6 ms: 1.13x faster (\-11%) T} T{ 14.3 ms: 1.23x faster (\-19%) T} .TE .SS crypto_pyaes .TS box center; l|l|l. T{ Benchmark T} T{ 2016\-04\-01 (master) T} T{ 2016\-05\-01 (master) T} _ T{ crypto_pyaes T} T{ 226 ms T} T{ 205 ms: 1.10x faster (\-9%) T} .TE .sp 2016\-03\-01 .. 2016\-06\-01: .TS box center; l|l|l. T{ Benchmark T} T{ 2016\-03\-01 (13d09afff127) T} T{ 2016\-06\-01 (d80ab7d94578) T} _ T{ crypto_pyaes T} T{ 231 ms T} T{ 199 ms: 1.16x faster (\-14%) T} .TE .SS json_loads .sp Progress over 21 months, 2015\-01\-01 .. 2016\-10\-01: .TS box center; l|l|l. T{ Benchmark T} T{ 2015\-01\-01 (52074ac866eb) T} T{ 2016\-10\-01 (78a111c7d867) T} _ T{ json_loads T} T{ 64.0 us T} T{ 56.6 us: 1.13x faster (\-11%) T} .TE .SS logging_silent .TS box center; l|l|l. T{ Benchmark T} T{ 2016\-01\-01 (899b72cee21c) T} T{ 2016\-07\-01 (355048970b2a) T} _ T{ logging_silent T} T{ 718 ns T} T{ 606 ns: 1.18x faster (\-16%) T} .TE .SS pickle .sp pickle, 2016\-08\-02 .. 2016\-09\-08: .TS box center; l|l|l. T{ Benchmark T} T{ 2016\-08\-02 (133138a284be) T} T{ 2016\-09\-08 (10427f44852b) T} _ T{ pickle T} T{ 25.5 us T} T{ 21.4 us: 1.19x faster (\-16%) T} .TE .sp pickle dict/list: .TS box center; l|l|l. T{ Benchmark T} T{ 2016\-04\-01 (6b6abd4cf10e) T} T{ 2016\-10\-01 (78a111c7d867) T} _ T{ pickle_dict T} T{ 64.5 us T} T{ 57.7 us: 1.12x faster (\-11%) T} _ T{ pickle_list T} T{ 9.06 us T} T{ 7.79 us: 1.16x faster (\-14%) T} .TE .sp unpickle: .TS box center; l|l|l. T{ Benchmark T} T{ 2015\-07\-01 (d7982beca93c) T} T{ 2015\-10\-01 (30b7138fe12b) T} _ T{ unpickle T} T{ 36.9 us T} T{ 32.8 us: 1.13x faster (\-11%) T} .TE .SS python_startup .sp 2015\-04\-01 .. 2015\-10\-01: .TS box center; l|l|l. T{ Benchmark T} T{ 2015\-04\-01 (4fd929b43121) T} T{ 2015\-10\-01 (30b7138fe12b) T} _ T{ python_startup T} T{ 16.4 ms T} T{ 17.2 ms: 1.05x slower (+5%) T} _ T{ python_startup_no_site T} T{ 8.65 ms T} T{ 8.90 ms: 1.03x slower (+3%) T} .TE .sp 2016\-04\-01 .. 2017\-01\-01: .TS box center; l|l|l. T{ Benchmark T} T{ 2016\-04\-01 (6b6abd4cf10e) T} T{ 2017\-01\-01 (67e1aa0b58be) T} _ T{ python_startup T} T{ 17.3 ms T} T{ 14.5 ms: 1.20x faster (\-16%) T} _ T{ python_startup_no_site T} T{ 8.89 ms T} T{ 8.39 ms: 1.06x faster (\-6%) T} .TE .SS regex_compile .TS box center; l|l|l|l. T{ Benchmark T} T{ 2016\-04\-01 (6b6abd4cf10e) T} T{ 2016\-07\-01 (355048970b2a) T} T{ 2016\-10\-01 (78a111c7d867) T} _ T{ regex_compile T} T{ 339 ms T} T{ 309 ms: 1.10x faster (\-9%) T} T{ 383 ms: 1.13x slower (+13%) T} .TE .SS telco .TS box center; l|l|l|l|l|l. T{ Benchmark T} T{ 2016\-01\-01 (899b72cee21c) T} T{ 2016\-04\-01 (6b6abd4cf10e) T} T{ 2016\-07\-01 (355048970b2a) T} T{ 2016\-10\-01 (78a111c7d867) T} T{ 2017\-03\-31 (cdcac039fb44) T} _ T{ telco T} T{ 19.6 ms T} T{ 19.2 ms: 1.02x faster (\-2%) T} T{ 18.3 ms: 1.08x faster (\-7%) T} T{ 15.1 ms: 1.30x faster (\-23%) T} T{ 13.9 ms: 1.41x faster (\-29%) T} .TE .SS scimark .sp 2016\-10\-01 .. 2017\-03\-31: .TS box center; l|l|l|l.
T{ Benchmark T} T{ 2016\-10\-01 (78a111c7d867) T} T{ 2017\-01\-01 (67e1aa0b58be) T} T{ 2017\-03\-31 (cdcac039fb44) T} _ T{ scimark_lu T} T{ 423 ms T} T{ 378 ms: 1.12x faster (\-11%) T} T{ 318 ms: 1.33x faster (\-25%) T} _ T{ scimark_sor T} T{ 426 ms T} T{ 403 ms: 1.06x faster (\-5%) T} T{ 375 ms: 1.14x faster (\-12%) T} .TE .SS sqlalchemy_declarative .TS box center; l|l|l. T{ Benchmark T} T{ 2014\-10\-01 (5a789f7eaf81) T} T{ 2015\-10\-01 (30b7138fe12b) T} _ T{ sqlalchemy_declarative T} T{ 345 ms T} T{ 301 ms: 1.15x faster (\-13%) T} .TE .SS sympy .sp 2016\-04\-01 .. 2016\-10\-01: .TS box center; l|l|l|l. T{ Benchmark T} T{ 2016\-04\-01 (6b6abd4cf10e) T} T{ 2016\-07\-01 (355048970b2a) T} T{ 2016\-10\-01 (78a111c7d867) T} _ T{ sympy_expand T} T{ 1.10 sec T} T{ 1.01 sec: 1.09x faster (\-8%) T} T{ 942 ms: 1.17x faster (\-14%) T} _ T{ sympy_integrate T} T{ 46.6 ms T} T{ 42.9 ms: 1.09x faster (\-8%) T} T{ 41.2 ms: 1.13x faster (\-11%) T} _ T{ sympy_sum T} T{ 247 ms T} T{ 233 ms: 1.06x faster (\-6%) T} T{ 199 ms: 1.24x faster (\-19%) T} _ T{ sympy_str T} T{ 483 ms T} T{ 454 ms: 1.07x faster (\-6%) T} T{ 427 ms: 1.13x faster (\-12%) T} .TE .SS xml_etree_generate .TS box center; l|l|l|l|l|l. T{ Benchmark T} T{ 2015\-04\-01 (4fd929b43121) T} T{ 2015\-07\-01 (d7982beca93c) T} T{ 2015\-10\-01 (30b7138fe12b) T} T{ 2016\-01\-01 (899b72cee21c) T} T{ 2016\-07\-01 (355048970b2a) T} _ T{ xml_etree_generate T} T{ 282 ms T} T{ 267 ms: 1.06x faster (\-5%) T} T{ 256 ms: 1.10x faster (\-9%) T} T{ 237 ms: 1.19x faster (\-16%) T} T{ 212 ms: 1.33x faster (\-25%) T} .TE .SH CHANGELOG .SS Version 1.11.0 (2024\-03\-09) .INDENT 0.0 .IP \(bu 2 Add a \-\-same\-loops option to the run command to use the exact same number of loops as a previous run (without recalibrating). .IP \(bu 2 Bump pyperf to 2.6.3 .IP \(bu 2 Fix the django_template benchmark for compatibility with 3.13 .IP \(bu 2 Fix benchmark.conf.sample .UNINDENT .SS Version 1.10.0 (2023\-10\-22) .INDENT 0.0 .IP \(bu 2 Add benchmark for asyncio_websockets .IP \(bu 2 Expose \-\-min\-time from pyperf in the pyperformance CLI .IP \(bu 2 Bump coverage to 7.3.2 for compatibility with 3.13 .IP \(bu 2 Bump greenlet to 3.0.0rc3 for compatibility with 3.13 .UNINDENT .SS Version 1.0.9 (2023\-06\-14) .INDENT 0.0 .IP \(bu 2 Vendor lib2to3 for Python 3.13+ .IP \(bu 2 Add TaskGroups variants to async_tree benchmarks .UNINDENT .SS Version 1.0.8 (2023\-06\-02) .INDENT 0.0 .IP \(bu 2 Move the main requirements.txt file to pyperformance/requirements so that dependabot can only run on that one file .IP \(bu 2 Update dependencies of benchmarks not to specify setuptools .IP \(bu 2 On older versions of Python, skip benchmarks that use features introduced in newer Python versions .IP \(bu 2 Support \fB\-\-inherit\-environ\fP when reusing a venv .IP \(bu 2 Use tomllib/tomli over toml .IP \(bu 2 Update MANIFEST.in to include cert files for the asyncio_tcp_ssl benchmark .IP \(bu 2 Fix undefined variable issue when raising VenvPipInstallFailedError .IP \(bu 2 Add mypy config; run mypy in CI .IP \(bu 2 Fix a str.partition typo in _pyproject_toml.py .IP \(bu 2 Add version of Richards benchmark that uses super() .IP \(bu 2 Add a benchmark for runtime\-checkable protocols .IP \(bu 2 Extend async tree benchmarks to cover eager task execution .UNINDENT .SS Version 1.0.7 (2023\-04\-22) .INDENT 0.0 .IP \(bu 2 Upgrade pyperf from 2.5.0 to 2.6.0 .IP \(bu 2 Clean unused imports and other small code details .IP \(bu 2 Migrate to a pyproject.toml\-based project layout .IP \(bu 2 Fix the django_template benchmark due
to the lack of distutils .IP \(bu 2 Add benchmark for toml .IP \(bu 2 Add benchmark for comprehensions .IP \(bu 2 Add benchmark for asyncio_tcp_ssl .IP \(bu 2 Add benchmark for asyncio_tcp .IP \(bu 2 Add benchmark for Dask scheduler .IP \(bu 2 Add the gc benchmarks to the MANIFEST file .UNINDENT .SS Version 1.0.6 (2022\-11\-20) .INDENT 0.0 .IP \(bu 2 Upgrade pyperf from 2.4.1 to 2.5.0 .IP \(bu 2 Add a benchmark to measure gc traversal .IP \(bu 2 Add jobs field in compile section to specify make \-j param .IP \(bu 2 Add benchmark for Docutils .IP \(bu 2 Add async_generators benchmark .IP \(bu 2 Add benchmark for IPC .IP \(bu 2 Fix Manifest Group .IP \(bu 2 Fix installing dev build of pyperformance inside compile/compile_all .IP \(bu 2 Always upload, even when some benchmarks fail .IP \(bu 2 Add sqlglot benchmarks .IP \(bu 2 Support reporting geometric mean by tags .IP \(bu 2 Allow for specifying local wheels and sdists as dependencies .IP \(bu 2 Add a benchmark based on \fIpython \-m pprint\fP .IP \(bu 2 Add mdp back into the default group .IP \(bu 2 Add coroutines benchmark .IP \(bu 2 Reduce noise in generators benchmark .IP \(bu 2 Add benchmark for deepcopy .IP \(bu 2 Add coverage benchmark .IP \(bu 2 Add generators benchmark .IP \(bu 2 Add benchmark for async tree workloads .IP \(bu 2 Support relative paths to manifest files .IP \(bu 2 Add support for multiple benchmark groups in a manifest .IP \(bu 2 Fix \-\-inherit\-environ issue .IP \(bu 2 Use working Genshi 0.7.7 .UNINDENT .SS Version 1.0.4 (2022\-01\-25) .INDENT 0.0 .IP \(bu 2 Re\-release support for user\-defined benchmarks after fixing a problem with virtual environments. .UNINDENT .SS Version 1.0.3 (2021\-12\-20) .INDENT 0.0 .IP \(bu 2 Support user\-defined benchmark suites. .UNINDENT .SS Version 1.0.2 (2021\-05\-11) .INDENT 0.0 .IP \(bu 2 Disable the genshi benchmark temporarily since it is no longer compatible with Python 3.11. .IP \(bu 2 Reenable html5lib benchmark: html5lib 1.1 has been released. .IP \(bu 2 Update requirements. .IP \(bu 2 Replace Travis CI with GitHub Actions. .IP \(bu 2 The development branch \fBmaster\fP was renamed to \fBmain\fP\&. See \X'tty: link https://sfconservancy.org/news/2020/jun/23/gitbranchname/'\fI\%https://sfconservancy.org/news/2020/jun/23/gitbranchname/\fP\X'tty: link' for the rationale. .UNINDENT .SS Version 1.0.1 (2020\-03\-26) .INDENT 0.0 .IP \(bu 2 Drop usage of the six module since Python 2 is no longer supported. Remove Python 2\-specific code. .IP \(bu 2 Update dependencies: .INDENT 2.0 .IP \(bu 2 django: 3.0 => 3.0.4 .IP \(bu 2 dulwich: 0.19.14 => 0.19.15 .IP \(bu 2 mako: 1.1.0 => 1.1.2 .IP \(bu 2 mercurial: 5.1.1 => 5.3.1 .IP \(bu 2 psutil: 5.6.7 => 5.7.0 .IP \(bu 2 pyperf: 1.7.0 => 2.0.0 .IP \(bu 2 sqlalchemy: 1.3.12 => 1.3.15 .IP \(bu 2 sympy: 1.5 => 1.5.1 .IP \(bu 2 tornado: 6.0.3 => 6.0.4 .UNINDENT .IP \(bu 2 Remove six, html5lib and mercurial requirements. .IP \(bu 2 pip\-tools (pip\-compile) is now used to update dependencies .UNINDENT .SS Version 1.0.0 (2019\-12\-17) .INDENT 0.0 .IP \(bu 2 Enable pyflate benchmarks on Python 3. .IP \(bu 2 Remove \fBspambayes\fP benchmark: it is not compatible with Python 3. .IP \(bu 2 Remove the \fB2n3\fP benchmark group. .IP \(bu 2 Drop Python 2.7 support: old Django and Tornado versions are not compatible with incoming Python 3.9. .IP \(bu 2 Disable html5lib benchmark temporarily, since it\(aqs no longer compatible with Python 3.9.
.IP \(bu 2 Update requirements: .INDENT 2.0 .IP \(bu 2 Django: 1.11.22 => 3.0 .IP \(bu 2 Mako: 1.0.14 => 1.1.0 .IP \(bu 2 SQLAlchemy: 1.3.6 => 1.3.12 .IP \(bu 2 certifi: 2019.6.16 => 2019.11.28 .IP \(bu 2 docutils: 0.15.1 => 0.15.2 .IP \(bu 2 dulwich: 0.19.11 => 0.19.14 .IP \(bu 2 mercurial: 5.0.2 => 5.1.1 .IP \(bu 2 psutil: 5.6.3 => 5.6.7 .IP \(bu 2 pyperf: 1.6.1 => 1.7.0 .IP \(bu 2 six: 1.12.0 => 1.13.0 .IP \(bu 2 sympy: 1.4 => 1.5 .UNINDENT .UNINDENT .SS Version 0.9.1 (2019\-07\-29) .INDENT 0.0 .IP \(bu 2 Enable hg_startup on Python 3 .IP \(bu 2 Fix compatibility with Python 3.8 beta 2 .IP \(bu 2 Update requirements: .INDENT 2.0 .IP \(bu 2 certifi: 2019.3.9 => 2019.6.16 .IP \(bu 2 Chameleon: 3.6.1 => 3.6.2 .IP \(bu 2 Django: 1.11.20 => 1.11.22 .IP \(bu 2 docutils: 0.14 => 0.15.1.post1 .IP \(bu 2 Mako: 1.0.10 => 1.0.14 .IP \(bu 2 mercurial: 5.0 => 5.0.2 .IP \(bu 2 pathlib2: 2.3.3 => 2.3.4 .IP \(bu 2 psutil: 5.6.2 => 5.6.3 .IP \(bu 2 SQLAlchemy: 1.3.4 => 1.3.6 .UNINDENT .UNINDENT .SS Version 0.9.0 (2019\-05\-29) .INDENT 0.0 .IP \(bu 2 Project renamed from \(dqperformance\(dq to \(dqpyperformance\(dq .IP \(bu 2 Upgrade pyperf from version 1.6.0 to 1.6.1. The project has been renamed from \(dqperf\(dq to \(dqpyperf\(dq. Update imports. .IP \(bu 2 Issue #54: Update Genshi to 0.7.3. It is now compatible with Python 3.8. .IP \(bu 2 Update requirements: .INDENT 2.0 .IP \(bu 2 Mako: 1.0.9 => 1.0.10 .IP \(bu 2 SQLAlchemy: 1.3.3 => 1.3.4 .UNINDENT .UNINDENT .SS Version 0.8.0 (2019\-05\-10) .INDENT 0.0 .IP \(bu 2 compile command: Add \(dqpkg_only\(dq option to benchmark.conf. Add support for native libraries that are installed but not on path. Patch by Robert Grimm. .IP \(bu 2 Update Travis configuration: use trusty image, use pip cache. Patch by Inada Naoki. .IP \(bu 2 Upgrade tornado to 5.1.1. Patch by Inada Naoki. .IP \(bu 2 Fix compile command on Mac OS: no program extension. Patch by Anthony Shaw. .IP \(bu 2 Update requirements: .INDENT 2.0 .IP \(bu 2 Chameleon: 3.4 => 3.6.1 .IP \(bu 2 Django: 1.11.16 => 1.11.20 .IP \(bu 2 Genshi: 0.7.1 => 0.7.2 .IP \(bu 2 Mako: 1.0.7 => 1.0.9 .IP \(bu 2 MarkupSafe: 1.0 => 1.1.1 .IP \(bu 2 SQLAlchemy: 1.2.12 => 1.3.3 .IP \(bu 2 certifi: 2018.10.15 => 2019.3.9 .IP \(bu 2 dulwich: 0.19.6 => 0.19.11 .IP \(bu 2 mercurial: 4.7.2 => 5.0 .IP \(bu 2 mpmath: 1.0.0 => 1.1.0 .IP \(bu 2 pathlib2: 2.3.2 => 2.3.3 .IP \(bu 2 perf: 1.5.1 => 1.6.0 .IP \(bu 2 psutil: 5.4.7 => 5.6.2 .IP \(bu 2 six: 1.11.0 => 1.12.0 .IP \(bu 2 sympy: 1.3 => 1.4 .IP \(bu 2 tornado: 4.5.3 => 5.1.1 .UNINDENT .UNINDENT .SS Version 0.7.0 (2018\-10\-16) .INDENT 0.0 .IP \(bu 2 python_startup: Add \fB\-\-exit\fP option. .IP \(bu 2 Update requirements: .INDENT 2.0 .IP \(bu 2 certifi: 2017.11.5 => 2018.10.15 .IP \(bu 2 Chameleon: 3.2 => 3.4 .IP \(bu 2 Django: 1.11.9 => 1.11.16 .IP \(bu 2 dulwich: 0.18.6 => 0.19.6 .IP \(bu 2 Genshi: 0.7 => 0.7.1 .IP \(bu 2 mercurial: 4.4.2 => 4.7.2 .IP \(bu 2 pathlib2: 2.3.0 => 2.3.2 .IP \(bu 2 psutil: 5.4.3 => 5.4.7 .IP \(bu 2 SQLAlchemy: 1.2.0 => 1.2.12 .IP \(bu 2 sympy: 1.1.1 => 1.3 .UNINDENT .IP \(bu 2 Fix issue #40 for pip 10 and newer: Remove indirect dependencies. Indirect dependencies were used to install cffi, but Mercurial 4.0 doesn\(aqt depend on cffi anymore. .UNINDENT .SS Version 0.6.1 (2018\-01\-11) .INDENT 0.0 .IP \(bu 2 Fix inherit\-environ: propagate to recursive invocations of \fBperformance\fP in \fBcompile\fP and \fBcompile_all\fP commands. .IP \(bu 2 Fix the \fB\-\-track\-memory\fP option thanks to the update to perf 1.5.
.IP \(bu 2 Update requirements .INDENT 2.0 .IP \(bu 2 certifi: 2017.4.17 => 2017.11.5 .IP \(bu 2 Chameleon: 3.1 => 3.2 .IP \(bu 2 Django: 1.11.3 => 1.11.9 .IP \(bu 2 docutils: 0.13.1 => 0.14 .IP \(bu 2 dulwich: 0.17.3 => 0.18.6 .IP \(bu 2 html5lib: 0.999999999 => 1.0.1 .IP \(bu 2 Mako: 1.0.6 => 1.0.7 .IP \(bu 2 mercurial: 4.2.2 => 4.4.2 .IP \(bu 2 mpmath: 0.19 => 1.0.0 .IP \(bu 2 perf: 1.4 => 1.5.1 (fix \fB\-\-track\-memory\fP option) .IP \(bu 2 psutil: 5.2.2 => 5.4.3 .IP \(bu 2 pyaes: 1.6.0 => 1.6.1 .IP \(bu 2 six: 1.10.0 => 1.11.0 .IP \(bu 2 SQLAlchemy: 1.1.11 => 1.2.0 .IP \(bu 2 sympy: 1.0 => 1.1.1 .IP \(bu 2 tornado: 4.5.1 => 4.5.3 .UNINDENT .UNINDENT .SS Version 0.6.0 (2017\-07\-06) .INDENT 0.0 .IP \(bu 2 Change \fBwarn\fP to \fBwarning\fP in \fIbm_logging.py\fP\&. In Python 3, Logger.warn() calls warnings.warn() to log a deprecation warning, and so is slower than Logger.warning(). .IP \(bu 2 Add back the \fBlogging_silent\fP microbenchmark suite. .IP \(bu 2 compile command: update the Git repository before getting the revision .IP \(bu 2 Update requirements .INDENT 2.0 .IP \(bu 2 perf: 1.3 => 1.4 (fix parse_cpu_list(): strip also NUL characters) .IP \(bu 2 Django: 1.11.1 => 1.11.3 .IP \(bu 2 mercurial: 4.2 => 4.2.2 .IP \(bu 2 pathlib2: 2.2.1 => 2.3.0 .IP \(bu 2 SQLAlchemy: 1.1.10 => 1.1.11 .UNINDENT .UNINDENT .SS Version 0.5.5 (2017\-05\-29) .INDENT 0.0 .IP \(bu 2 On the 2.x branch of CPython, \fBcompile\fP now passes \fB\-\-enable\-unicode=ucs4\fP to the \fBconfigure\fP script on all platforms, except on Windows which uses UTF\-16 because of its 16\-bit wchar_t. .IP \(bu 2 The \fBfloat\fP benchmark now uses \fB__slots__\fP on the \fBPoint\fP class. .IP \(bu 2 Remove the following microbenchmarks. They have been moved to the \X'tty: link https://github.com/vstinner/pymicrobench'\fI\%pymicrobench\fP\X'tty: link' project because they are too short, not representative of real applications and are too unstable. .INDENT 2.0 .IP \(bu 2 \fBpybench\fP microbenchmark suite .IP \(bu 2 \fBcall_simple\fP .IP \(bu 2 \fBcall_method\fP .IP \(bu 2 \fBcall_method_unknown\fP .IP \(bu 2 \fBcall_method_slots\fP .IP \(bu 2 \fBlogging_silent\fP: values are faster than 1 ns on PyPy with 2^27 loops! (and around 0.7 us on CPython) .UNINDENT .IP \(bu 2 Update requirements .INDENT 2.0 .IP \(bu 2 Django: 1.11 => 1.11.1 .IP \(bu 2 SQLAlchemy: 1.1.9 => 1.1.10 .IP \(bu 2 certifi: 2017.1.23 => 2017.4.17 .IP \(bu 2 perf: 1.2 => 1.3 .IP \(bu 2 mercurial: 4.1.2 => 4.2 .IP \(bu 2 tornado: 4.4.3 => 4.5.1 .UNINDENT .UNINDENT .SS Version 0.5.4 (2017\-04\-10) .INDENT 0.0 .IP \(bu 2 Create new documentation at: \X'tty: link http://pyperformance.readthedocs.io/'\fI\%http://pyperformance.readthedocs.io/\fP\X'tty: link' .IP \(bu 2 Add \(dqCPython results, 2017\(dq to the doc: significant performance changes, significant optimizations, timeline, etc. .IP \(bu 2 The \fBshow\fP command doesn\(aqt need to create a virtual env anymore. .IP \(bu 2 Add new commands: .INDENT 2.0 .IP \(bu 2 \fBpyperformance compile\fP: compile, install and benchmark .IP \(bu 2 \fBpyperformance compile_all\fP: benchmark multiple branches and revisions of Python .IP \(bu 2 \fBpyperformance upload\fP: upload a JSON file to a Codespeed instance .UNINDENT .IP \(bu 2 setup.py: add dependencies on the \fBperf\fP and \fBsix\fP modules. .IP \(bu 2 bm_xml_etree now uses \(dq_pure_python\(dq in benchmark names if the accelerator is explicitly disabled.
.IP \(bu 2 Upgrade requirements: .INDENT 2.0 .IP \(bu 2 Django: 1.10.6 => 1.11 .IP \(bu 2 SQLAlchemy: 1.1.6 => 1.1.9 .IP \(bu 2 mercurial: 4.1.1 => 4.1.2 .IP \(bu 2 perf: 1.1 => 1.2 .IP \(bu 2 psutil: 5.2.1 => 5.2.2 .IP \(bu 2 tornado: 4.4.2 => 4.4.3 .IP \(bu 2 webencodings: 0.5 => 0.5.1 .UNINDENT .IP \(bu 2 perf 1.2 now calibrates the number of warmups on PyPy. .IP \(bu 2 On Python 3.5a0: force pip 7.1.2 and setuptools 18.5: \X'tty: link https://sourceforge.net/p/pyparsing/bugs/100/'\fI\%https://sourceforge.net/p/pyparsing/bugs/100/\fP\X'tty: link' .UNINDENT .SS Version 0.5.3 (2017\-03\-27) .INDENT 0.0 .IP \(bu 2 Upgrade Dulwich to 0.17.3 to support PyPy older than 5.6: see \X'tty: link https://github.com/jelmer/dulwich/issues/509'\fI\%https://github.com/jelmer/dulwich/issues/509\fP\X'tty: link' .IP \(bu 2 Fix ResourceWarning warnings: explicitly close files and sockets. .IP \(bu 2 scripts: replace Mercurial commands with Git commands. .IP \(bu 2 Upgrade requirements: .INDENT 2.0 .IP \(bu 2 dulwich: 0.17.1 => 0.17.3 .IP \(bu 2 perf: 1.0 => 1.1 .IP \(bu 2 psutil: 5.2.0 => 5.2.1 .UNINDENT .UNINDENT .SS Version 0.5.2 (2017\-03\-17) .INDENT 0.0 .IP \(bu 2 Upgrade requirements: .INDENT 2.0 .IP \(bu 2 certifi: 2016.9.26 => 2017.1.23 .IP \(bu 2 Chameleon: 3.0 => 3.1 .IP \(bu 2 Django: 1.10.5 => 1.10.6 .IP \(bu 2 MarkupSafe: 0.23 => 1.0 .IP \(bu 2 dulwich: 0.16.3 => 0.17.1 .IP \(bu 2 mercurial: 4.0.2 => 4.1.1 .IP \(bu 2 pathlib2: 2.2.0 => 2.2.1 .IP \(bu 2 perf: 0.9.3 => 1.0 .IP \(bu 2 psutil: 5.0.1 => 5.2.0 .IP \(bu 2 SQLAlchemy: 1.1.4 => 1.1.6 .UNINDENT .UNINDENT .SS Version 0.5.1 (2017\-01\-16) .INDENT 0.0 .IP \(bu 2 Fix Windows support (upgrade perf from 0.9.0 to 0.9.3) .IP \(bu 2 Upgrade requirements: .INDENT 2.0 .IP \(bu 2 Chameleon: 2.25 => 3.0 .IP \(bu 2 Django: 1.10.3 => 1.10.5 .IP \(bu 2 docutils: 0.12 => 0.13.1 .IP \(bu 2 dulwich: 0.15.0 => 0.16.3 .IP \(bu 2 mercurial: 4.0.0 => 4.0.2 .IP \(bu 2 perf: 0.9.0 => 0.9.3 .IP \(bu 2 psutil: 5.0.0 => 5.0.1 .UNINDENT .UNINDENT .SS Version 0.5.0 (2016\-11\-16) .INDENT 0.0 .IP \(bu 2 Add \fBmdp\fP benchmark: battle with damages and topological sorting of nodes in a graph .IP \(bu 2 The \fBdefault\fP benchmark group now includes all benchmarks but \fBpybench\fP .IP \(bu 2 If a benchmark fails, log an error, continue to execute following benchmarks, but exit with error code 1. .IP \(bu 2 Remove deprecated benchmarks: \fBthreading_threaded_count\fP and \fBthreading_iterative_count\fP\&. It wasn\(aqt possible to run them anyway. .IP \(bu 2 The \fBdulwich\fP requirement is now optional since its installation fails on Windows. .IP \(bu 2 Upgrade requirements: .INDENT 2.0 .IP \(bu 2 Mako: 1.0.5 => 1.0.6 .IP \(bu 2 Mercurial: 3.9.2 => 4.0.0 .IP \(bu 2 SQLAlchemy: 1.1.3 => 1.1.4 .IP \(bu 2 backports\-abc: 0.4 => 0.5 .UNINDENT .UNINDENT .SS Version 0.4.0 (2016\-11\-07) .INDENT 0.0 .IP \(bu 2 Add \fBsqlalchemy_imperative\fP benchmark: it wasn\(aqt registered properly .IP \(bu 2 The \fBlist\fP command now only lists the benchmarks that the \fBrun\fP command will run. The \fBlist\fP command gets a new \fB\-b/\-\-benchmarks\fP option. .IP \(bu 2 Rewrite the code creating the virtual environment to test pip correctly. Download and run \fBget\-pip.py\fP if pip installation failed.
.IP \(bu 2 Upgrade requirements: .INDENT 2.0 .IP \(bu 2 perf: 0.8.2 => 0.9.0 .IP \(bu 2 Django: 1.10.2 => 1.10.3 .IP \(bu 2 Mako: 1.0.4 => 1.0.5 .IP \(bu 2 psutil: 4.3.1 => 5.0.0 .IP \(bu 2 SQLAlchemy: 1.1.2 => 1.1.3 .UNINDENT .IP \(bu 2 Remove \fBvirtualenv\fP dependency .UNINDENT .SS Version 0.3.2 (2016\-10\-19) .INDENT 0.0 .IP \(bu 2 Fix setup.py: also include \fBperformance/benchmarks/data/asyncio.git/\fP .UNINDENT .SS Version 0.3.1 (2016\-10\-19) .INDENT 0.0 .IP \(bu 2 Add \fBregex_dna\fP benchmark .IP \(bu 2 The \fBrun\fP command now fails with an error if no benchmark was run. .IP \(bu 2 genshi, logging, scimark, sympy and xml_etree scripts now run all sub\-benchmarks by default .IP \(bu 2 Rewrite pybench using perf: remove the old legacy code to calibrate and run benchmarks, reuse the perf.Runner API. .IP \(bu 2 Change the heuristic used to create the virtual environment; the following commands are tried, in order: .INDENT 2.0 .IP \(bu 2 \fBpython \-m venv\fP .IP \(bu 2 \fBpython \-m virtualenv\fP .IP \(bu 2 \fBvirtualenv \-p python\fP .UNINDENT .IP \(bu 2 The creation of the virtual environment now ensures that pip works, to detect a \(dqpython3 \-m venv\(dq that didn\(aqt install pip. .IP \(bu 2 Upgrade perf dependency from 0.7.12 to 0.8.2: update all benchmarks to the new perf 0.8 API (which introduces incompatible changes) .IP \(bu 2 Update SQLAlchemy from 1.1.1 to 1.1.2 .UNINDENT .SS Version 0.3.0 (2016\-10\-11) .sp New benchmarks: .INDENT 0.0 .IP \(bu 2 Add \fBcrypto_pyaes\fP: Benchmark a pure\-Python implementation of the AES block\-cipher in CTR mode using the pyaes module (version 1.6.0). Add \fBpyaes\fP dependency. .IP \(bu 2 Add \fBsympy\fP: Benchmark on SymPy. Add \fBscipy\fP dependency. .IP \(bu 2 Add \fBscimark\fP benchmark .IP \(bu 2 Add \fBdeltablue\fP: DeltaBlue benchmark .IP \(bu 2 Add \fBdulwich_log\fP: Iterate on commits of the asyncio Git repository using the Dulwich module. Add \fBdulwich\fP (and \fBmpmath\fP) dependencies. .IP \(bu 2 Add \fBpyflate\fP: Pyflate benchmark, tar/bzip2 decompressor in pure Python .IP \(bu 2 Add \fBsqlite_synth\fP benchmark: Benchmark Python aggregate for SQLite .IP \(bu 2 Add \fBgenshi\fP benchmark: Render template to XML or plain text using the Genshi module. Add \fBGenshi\fP dependency. .IP \(bu 2 Add \fBsqlalchemy_declarative\fP and \fBsqlalchemy_imperative\fP benchmarks: SQLAlchemy Declarative and Imperative benchmarks using SQLite. Add \fBSQLAlchemy\fP dependency. .UNINDENT .sp Enhancements: .INDENT 0.0 .IP \(bu 2 \fBcompare\fP command now fails if the performance versions are different .IP \(bu 2 \fBnbody\fP: add \fB\-\-reference\fP and \fB\-\-iterations\fP command line options. .IP \(bu 2 \fBchaos\fP: add \fB\-\-width\fP, \fB\-\-height\fP, \fB\-\-thickness\fP, \fB\-\-filename\fP and \fB\-\-rng\-seed\fP command line options .IP \(bu 2 \fBdjango_template\fP: add \fB\-\-table\-size\fP command line option .IP \(bu 2 \fBjson_dumps\fP: add \fB\-\-cases\fP command line option .IP \(bu 2 \fBpidigits\fP: add \fB\-\-digits\fP command line option .IP \(bu 2 \fBraytrace\fP: add \fB\-\-width\fP, \fB\-\-height\fP and \fB\-\-filename\fP command line options .IP \(bu 2 Port \fBhtml5lib\fP benchmark to Python 3 .IP \(bu 2 Enable \fBpickle_pure_python\fP and \fBunpickle_pure_python\fP on Python 3 (code was already compatible with Python 3) .IP \(bu 2 Creating the virtual environment doesn\(aqt inherit environment variables (especially \fBPYTHONPATH\fP) by default anymore: the \fB\-\-inherit\-environ\fP command line option must now be used explicitly.
.UNINDENT .sp Bugfixes: .INDENT 0.0 .IP \(bu 2 The \fBchaos\fP benchmark now also resets the \fBrandom\fP module at each sample to get more reproducible benchmark results .IP \(bu 2 Logging benchmarks now truncate the in\-memory stream before each benchmark run .UNINDENT .sp Rename benchmarks: .INDENT 0.0 .IP \(bu 2 Rename benchmarks to get a consistent name between the command line and the benchmark name in the JSON file. .IP \(bu 2 Rename pickle benchmarks: .INDENT 2.0 .INDENT 3.5 .INDENT 0.0 .IP \(bu 2 \fBslowpickle\fP becomes \fBpickle_pure_python\fP .IP \(bu 2 \fBslowunpickle\fP becomes \fBunpickle_pure_python\fP .IP \(bu 2 \fBfastpickle\fP becomes \fBpickle\fP .IP \(bu 2 \fBfastunpickle\fP becomes \fBunpickle\fP .UNINDENT .UNINDENT .UNINDENT .UNINDENT .INDENT 0.0 .INDENT 3.5 .INDENT 0.0 .IP \(bu 2 Rename ElementTree benchmarks: replace \fBetree_\fP prefix with \fBxml_etree_\fP\&. .IP \(bu 2 Rename \fBhexiom2\fP to \fBhexiom_level25\fP and explicitly pass \fB\-\-level=25\fP parameter .IP \(bu 2 Rename \fBjson_load\fP to \fBjson_loads\fP .IP \(bu 2 Rename \fBjson_dump_v2\fP to \fBjson_dumps\fP (and remove the deprecated \fBjson_dump\fP benchmark) .IP \(bu 2 Rename \fBnormal_startup\fP to \fBpython_startup\fP, and \fBstartup_nosite\fP to \fBpython_startup_no_site\fP .IP \(bu 2 Rename \fBthreaded_count\fP to \fBthreading_threaded_count\fP, rename \fBiterative_count\fP to \fBthreading_iterative_count\fP .IP \(bu 2 Rename logging benchmarks: .INDENT 2.0 .IP \(bu 2 \fBsilent_logging\fP to \fBlogging_silent\fP .IP \(bu 2 \fBsimple_logging\fP to \fBlogging_simple\fP .IP \(bu 2 \fBformatted_logging\fP to \fBlogging_format\fP .UNINDENT .UNINDENT .UNINDENT .UNINDENT .sp Minor changes: .INDENT 0.0 .IP \(bu 2 Update dependencies .IP \(bu 2 Remove the broken \fB\-\-args\fP command line option. .UNINDENT .SS Version 0.2.2 (2016\-09\-19) .INDENT 0.0 .IP \(bu 2 Add a new \fBshow\fP command to display a benchmark file .IP \(bu 2 Issue #11: Display the Python version in compare. Also display the performance version. .IP \(bu 2 CPython issue #26383; csv output: don\(aqt truncate digits for timings shorter than 1 us .IP \(bu 2 compare: Use the sample unit of benchmarks; format values in the table output using that unit .IP \(bu 2 compare: Fix the table output if benchmarks only contain a single sample .IP \(bu 2 Remove unused \-C/\-\-control_label and \-E/\-\-experiment_label options .IP \(bu 2 Update perf dependency to 0.7.11 to get Benchmark.get_unit() and BenchmarkSuite.get_metadata() .UNINDENT .SS Version 0.2.1 (2016\-09\-10) .INDENT 0.0 .IP \(bu 2 Add \fB\-\-csv\fP option to the \fBcompare\fP command .IP \(bu 2 Fix \fBcompare \-O table\fP output format .IP \(bu 2 Freeze indirect dependencies in requirements.txt .IP \(bu 2 \fBrun\fP: add \fB\-\-track\-memory\fP option to track the memory peak usage .IP \(bu 2 Update perf dependency to 0.7.8 to support memory tracking and the new \fB\-\-inherit\-environ\fP command line option .IP \(bu 2 If the \fBvirtualenv\fP command fails, try another command to create the virtual environment: catch the \fBvirtualenv\fP error .IP \(bu 2 The first command to upgrade pip to version \fB>= 6.0\fP now uses the \fBpip\fP binary rather than \fBpython \-m pip\fP to support pip 1.0, which doesn\(aqt support the \fBpython \-m pip\fP CLI.
.IP \(bu 2 Update Django (1.10.1), Mercurial (3.9.1) and psutil (4.3.1) .IP \(bu 2 Rename the \fB\-\-inherit_env\fP command line option to \fB\-\-inherit\-environ\fP and fix it .UNINDENT .SS Version 0.2 (2016\-09\-01) .INDENT 0.0 .IP \(bu 2 Update Django dependency to 1.10 .IP \(bu 2 Update Chameleon dependency to 2.24 .IP \(bu 2 Add the \fB\-\-venv\fP command line option .IP \(bu 2 Convert Python startup, Mercurial startup and 2to3 benchmarks to perf scripts (bm_startup.py, bm_hg_startup.py and bm_2to3.py) .IP \(bu 2 Pass the \fB\-\-affinity\fP option to perf scripts rather than using the \fBtaskset\fP command .IP \(bu 2 Put more installer and optional requirements into \fBperformance/requirements.txt\fP .IP \(bu 2 Cached \fB\&.pyc\fP files are no longer removed before running a benchmark. Use the \fBvenv recreate\fP command to update a virtual environment if required. .IP \(bu 2 The broken \fB\-\-track_memory\fP option has been removed. It will be added back once it is fixed. .IP \(bu 2 Add the performance version to metadata .IP \(bu 2 Upgrade perf dependency to 0.7.5 to get \fBBenchmark.update_metadata()\fP .UNINDENT .SS Version 0.1.2 (2016\-08\-27) .INDENT 0.0 .IP \(bu 2 Windows is now supported .IP \(bu 2 Add a new \fBvenv\fP command to show, create, recreate or remove the virtual environment. .IP \(bu 2 Fix pybench benchmark (update to perf 0.7.4 API) .IP \(bu 2 performance now tries to install the \fBpsutil\fP module on CPython for better system metrics in metadata and CPU pinning on Python 2. .IP \(bu 2 The creation of the virtual environment now also tries the \fBvirtualenv\fP and \fBvenv\fP Python modules, not only the virtualenv command. .IP \(bu 2 The development version of performance now installs performance in editable mode (\fBpip install \-e\fP). .IP \(bu 2 The GitHub project was renamed from \fBpython/benchmarks\fP to \fBpython/performance\fP\&. .UNINDENT .SS Version 0.1.1 (2016\-08\-24) .INDENT 0.0 .IP \(bu 2 Fix the creation of the virtual environment .IP \(bu 2 Rename the pybenchmarks script to pyperformance .IP \(bu 2 Add \-p/\-\-python command line option .IP \(bu 2 Add a __main__ module to be able to run: python3 \-m performance .UNINDENT .SS Version 0.1 (2016\-08\-24) .INDENT 0.0 .IP \(bu 2 First release after the conversion to the perf module and move to GitHub .IP \(bu 2 Removed benchmarks: .INDENT 2.0 .IP \(bu 2 django_v2, django_v3 .IP \(bu 2 rietveld .IP \(bu 2 spitfire (and psyco): Spitfire is not available on PyPI .IP \(bu 2 pystone .IP \(bu 2 gcbench .IP \(bu 2 tuple_gc_hell .UNINDENT .UNINDENT .SS History .sp Project moved to \X'tty: link https://github.com/python/performance'\fI\%https://github.com/python/performance\fP\X'tty: link' in August 2016. Files reorganized, benchmarks patched to use the perf module to run benchmarks in multiple processes.
The project was hosted at \X'tty: link https://hg.python.org/benchmarks'\fI\%https://hg.python.org/benchmarks\fP\X'tty: link' until February 2016. .sp Other Python Benchmarks: .INDENT 0.0 .IP \(bu 2 CPython: \X'tty: link https://speed.python.org/'\fI\%speed.python.org\fP\X'tty: link' uses pyperf, pyperformance and \X'tty: link https://github.com/tobami/codespeed/'\fI\%Codespeed\fP\X'tty: link' (Django web application) .IP \(bu 2 PyPy: \X'tty: link http://speed.pypy.org/'\fI\%speed.pypy.org\fP\X'tty: link' uses \X'tty: link https://bitbucket.org/pypy/benchmarks'\fI\%PyPy benchmarks\fP\X'tty: link' .IP \(bu 2 Pyston: \X'tty: link https://github.com/dropbox/pyston-perf'\fI\%pyston\-perf\fP\X'tty: link' and \X'tty: link http://speed.pyston.org/'\fI\%speed.pyston.org\fP\X'tty: link' .IP \(bu 2 \X'tty: link http://numba.pydata.org/numba-benchmark/'\fI\%Numba benchmarks\fP\X'tty: link' .IP \(bu 2 Cython: \X'tty: link https://github.com/cython/cython/tree/master/Demos/benchmarks'\fI\%Cython Demos/benchmarks\fP\X'tty: link' .IP \(bu 2 pythran: \X'tty: link https://github.com/serge-sans-paille/numpy-benchmarks'\fI\%numpy\-benchmarks\fP\X'tty: link' .UNINDENT .sp See also the \X'tty: link https://mail.python.org/mailman/listinfo/speed'\fI\%Python speed mailing list\fP\X'tty: link' and the \X'tty: link http://pyperf.readthedocs.io/'\fI\%Python pyperf module\fP\X'tty: link' (used by pyperformance). .sp pyperformance is not tuned for PyPy yet: use the \X'tty: link https://foss.heptapod.net/pypy/benchmarks'\fI\%PyPy benchmarks project\fP\X'tty: link' instead to measure PyPy performance. .sp Image generated by bm_raytrace: a pure Python raytracer. .SH AUTHOR Victor Stinner .SH COPYRIGHT 2017, Victor Stinner .\" Generated by docutils manpage writer. .