PYTHONPERFORMANCEBENCHMARKSUITE(1) Python Performance Benchmark Suite

NAME
pythonperformancebenchmarksuite - Python Performance Benchmark Suite Documentation

The pyperformance project is intended to be an authoritative source of benchmarks for all Python implementations. The focus is on real-world benchmarks, rather than synthetic benchmarks, using whole applications when possible.

o pyperformance documentation
o pyperformance GitHub project (source code, issues)
o Download pyperformance on PyPI

pyperformance is distributed under the MIT license.

Documentation:

USAGE

Installation

Command to install pyperformance:

   python3 -m pip install pyperformance

The command installs a new pyperformance program. If needed, the pyperf and six dependencies are installed automatically.

pyperformance works on Python 3.6 and newer, but it may also work on Python 3.4 and 3.5.

At runtime, Python development files (header files) may be needed to install some dependencies like dulwich_log or psutil, to build their C extensions. Commands on Fedora to install dependencies:

o Python 3: sudo dnf install python3-devel
o PyPy: sudo dnf install pypy-devel

Windows notes

On Windows, if you want pyperformance to build dependencies from source (like greenlet, dulwich or psutil) with a python.exe built from source, you should not use that python.exe directly. Instead, you must run the little-known command PC\layout to create a filesystem layout that resembles an installed Python:

   .\python.bat -m PC.layout --preset-default --copy installed -v

(Use the --help flag for more info about PC\layout.) Now you can use the "installed" Python executable:

   installed\python.exe -m pip install pyperformance
   installed\python.exe -m pyperformance run ...

Using an actually installed Python executable (e.g. via py) works fine too.

Run benchmarks

Commands to compare Python 3.6 and Python 3.7 performance:

   pyperformance run --python=python3.6 -o py36.json
   pyperformance run --python=python3.7 -o py37.json
   pyperformance compare py36.json py37.json

Note: the python3 -m pyperformance ... syntax works as well (ex: python3 -m pyperformance run -o py37.json), but it requires installing pyperformance on each tested Python version.

JSON files are produced by the pyperf module and so can be analyzed using pyperf commands:

   python3 -m pyperf show py36.json
   python3 -m pyperf check py36.json
   python3 -m pyperf metadata py36.json
   python3 -m pyperf stats py36.json
   python3 -m pyperf hist py36.json
   python3 -m pyperf dump py36.json
   (...)

It's also possible to use pyperf to compare the results of two JSON files:

   python3 -m pyperf compare_to py36.json py37.json --table

Basic commands

pyperformance actions:

   run          Run benchmarks on the running Python
   show         Display a benchmark file
   compare      Compare two benchmark files
   list         List benchmarks of the running Python
   list_groups  List benchmark groups of the running Python
   venv         Actions on the virtual environment

Common options

Options available to all commands:

-h, --help show this help message and exit

run

Run benchmarks on the running Python.

Usage: pyperformance run [-h] [-r] [-f] [--debug-single-value] [-v] [-m] [--affinity CPU_LIST] [-o FILENAME] [--append FILENAME] [--manifest MANIFEST] [-b BM_LIST] [--inherit-environ VAR_LIST] [-p PYTHON]

options: -h, --help show this help message and exit -r, --rigorous Spend longer running tests to get more accurate results -f, --fast Get rough answers quickly --debug-single-value Debug: fastest mode, only compute a single value -v, --verbose Print more output -m, --track-memory Track memory usage.
This only works on Linux. --affinity CPU_LIST Specify CPU affinity for benchmark runs. This way, benchmarks can be forced to run on a given CPU to minimize run to run variation. -o FILENAME, --output FILENAME Run the benchmarks on only one interpreter and write benchmark into FILENAME. Provide only baseline_python, not changed_python. --append FILENAME Add runs to an existing file, or create it if it doesn't exist --manifest MANIFEST benchmark manifest file to use -b BM_LIST, --benchmarks BM_LIST Comma-separated list of benchmarks to run. Can contain both positive and negative arguments: --benchmarks=run_this,also_this,-not_this. If there are no positive arguments, we'll run all benchmarks except the negative arguments. Otherwise we run only the positive arguments. --inherit-environ VAR_LIST Comma-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. -p PYTHON, --python PYTHON Python executable (default: use running Python) --same-loops SAME_LOOPS Use the same number of loops as a previous run (i.e., don't recalibrate). Should be a path to a .json file from a previous run. show Display a benchmark file. Usage: show FILENAME positional arguments: FILENAME compare Compare two benchmark files. Usage: pyperformance compare [-h] [-v] [-O STYLE] [--csv CSV_FILE] [--inherit-environ VAR_LIST] [-p PYTHON] baseline_file.json changed_file.json positional arguments: baseline_file.json changed_file.json options: -v, --verbose Print more output -O STYLE, --output_style STYLE What style the benchmark output should take. Valid options are 'normal' and 'table'. Default is normal. --csv CSV_FILE Name of a file the results will be written to, as a three-column CSV file containing minimum runtimes for each benchmark. --inherit-environ VAR_LIST Comma-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. -p PYTHON, --python PYTHON Python executable (default: use running Python) list List benchmarks of the running Python. Usage: pyperformance list [-h] [--manifest MANIFEST] [-b BM_LIST] [--inherit-environ VAR_LIST] [-p PYTHON] options: --manifest MANIFEST benchmark manifest file to use -b BM_LIST, --benchmarks BM_LIST Comma-separated list of benchmarks to run. Can contain both positive and negative arguments: --benchmarks=run_this,also_this,-not_this. If there are no positive arguments, we'll run all benchmarks except the negative arguments. Otherwise we run only the positive arguments. --inherit-environ VAR_LIST Comma-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. -p PYTHON, --python PYTHON Python executable (default: use running Python) Use python3 -m pyperformance list -b all to list all benchmarks. list_groups List benchmark groups of the running Python. Usage: pyperformance list_groups [-h] [--manifest MANIFEST] [--inherit-environ VAR_LIST] [-p PYTHON] options: --manifest MANIFEST benchmark manifest file to use --inherit-environ VAR_LIST Comma-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. -p PYTHON, --python PYTHON Python executable (default: use running Python) venv Actions on the virtual environment. 
Actions: show Display the path to the virtual environment and its status (created or not) create Create the virtual environment recreate Force the recreation of the the virtual environment remove Remove the virtual environment Common options: --venv VENV Path to the virtual environment --inherit-environ VAR_LIST Comma-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. -p PYTHON, --python PYTHON Python executable (default: use running Python) venv show Display the path to the virtual environment and its status (created or not). Usage: pyperformance venv show [-h] [--venv VENV] [--inherit-environ VAR_LIST] [-p PYTHON] venv create Create the virtual environment. Usage: pyperformance venv create [-h] [--venv VENV] [--manifest MANIFEST] [-b BM_LIST] [--inherit-environ VAR_LIST] [-p PYTHON] options: --manifest MANIFEST benchmark manifest file to use -b BM_LIST, --benchmarks BM_LIST Comma-separated list of benchmarks to run. Can contain both positive and negative arguments: --benchmarks=run_this,also_this,-not_this. If there are no positive arguments, we'll run all benchmarks except the negative arguments. Otherwise we run only the positive arguments. venv recreate Force the recreation of the the virtual environment. Usage: pyperformance venv recreate [-h] [--venv VENV] [--manifest MANIFEST] [-b BM_LIST] [--inherit-environ VAR_LIST] [-p PYTHON] options: --manifest MANIFEST benchmark manifest file to use -b BM_LIST, --benchmarks BM_LIST Comma-separated list of benchmarks to run. Can contain both positive and negative arguments: --benchmarks=run_this,also_this,-not_this. If there are no positive arguments, we'll run all benchmarks except the negative arguments. Otherwise we run only the positive arguments. venv remove Remove the virtual environment. Usage: pyperformance venv remove [-h] [--venv VENV] [--inherit-environ VAR_LIST] [-p PYTHON] Compile Python to run benchmarks pyperformance actions: compile Compile and install CPython and run benchmarks on installed Python compile_all Compile and install CPython and run benchmarks on installed Python on all branches and revisions of CONFIG_FILE upload Upload JSON results to a Codespeed website All these commands require a configuration file. Simple configuration usable for compile (but not for compile_all nor upload), doc/benchmark.conf: [config] json_dir = ~/prog/python/bench_json [scm] repo_dir = ~/prog/python/master update = True [compile] bench_dir = ~/prog/python/bench_tmpdir [run_benchmark] system_tune = True affinity = 2,3 Configuration file sample with comments, doc/benchmark.conf.sample: [config] # Directory where JSON files are written. # - uploaded files are moved to json_dir/uploaded/ # - results of patched Python are written into json_dir/patch/ json_dir = ~/json # If True, compile CPython in debug mode (LTO and PGO disabled), # run benchmarks with --debug-single-sample, and disable upload. # # Use this option to quickly test a configuration. debug = False [scm] # Directory of CPython source code (Git repository) repo_dir = ~/cpython # Update the Git repository (git fetch)? update = True # Name of the Git remote, used to create revision of # the Git branch. For example, use revision 'remotes/origin/3.6' # for the branch '3.6'. 
git_remote = remotes/origin [compile] # Create files into bench_dir: # - bench_dir/bench-xxx.log # - bench_dir/prefix/: where Python is installed # - bench_dir/venv/: Virtual environment used by pyperformance bench_dir = ~/bench_tmpdir # Link Time Optimization (LTO)? lto = True # Profiled Guided Optimization (PGO)? pgo = True # The space-separated list of libraries that are package-only, # i.e., locally installed but not on header and library paths. # For each such library, determine the install path and add an # appropriate subpath to CFLAGS and LDFLAGS declarations passed # to configure. As an exception, the prefix for openssl, if that # library is present here, is passed via the --with-openssl # option. Currently, this only works with Homebrew on macOS. # If running on macOS with Homebrew, you probably want to use: # pkg_only = openssl readline sqlite3 xz zlib # The version of zlib shipping with macOS probably works as well, # as long as Apple's SDK headers are installed. pkg_only = # Install Python? If false, run Python from the build directory # # WARNING: Running Python from the build directory introduces subtle changes # compared to running an installed Python. Moreover, creating a virtual # environment using a Python run from the build directory fails in many cases, # especially on Python older than 3.4. Only disable installation if you # really understand what you are doing! install = True # Specify '-j' parameter in 'make' command jobs = 8 [run_benchmark] # Run "sudo python3 -m pyperf system tune" before running benchmarks? system_tune = True # --manifest option for 'pyperformance run' manifest = # --benchmarks option for 'pyperformance run' benchmarks = # --affinity option for 'pyperf system tune' and 'pyperformance run' affinity = # Upload generated JSON file? # # Upload is disabled on patched Python, in debug mode or if install is # disabled. upload = False # Configuration to upload results to a Codespeed website [upload] url = environment = executable = project = [compile_all] # List of CPython Git branches branches = default 3.6 3.5 2.7 # List of revisions to benchmark by compile_all [compile_all_revisions] # list of 'sha1=' (default branch: 'master') or 'sha1=branch' # used by the "pyperformance compile_all" command # e.g.: 11159d2c9d6616497ef4cc62953a5c3cc8454afb = compile Compile Python, install Python and run benchmarks on the installed Python. Usage: pyperformance compile [-h] [--patch PATCH] [-U] [-T] [--inherit-environ VAR_LIST] [-p PYTHON] config_file revision [branch] positional arguments: config_file Configuration filename revision Python benchmarked revision branch Git branch options: --patch PATCH Patch file -U, --no-update Don't update the Git repository -T, --no-tune Don't run 'pyperf system tune' to tune the system for benchmarks --inherit-environ VAR_LIST Comma-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. -p PYTHON, --python PYTHON Python executable (default: use running Python) Notes: o PGO is broken on Ubuntu 14.04 LTS with GCC 4.8.4-2ubuntu1~14.04: Modules/socketmodule.c:7743:1: internal compiler error: in edge_badness, at ipa-inline.c:895 compile_all Compile all branches and revisions of CONFIG_FILE. 
Usage: pyperformance compile_all [-h] [--inherit-environ VAR_LIST] [-p PYTHON] config_file

positional arguments: config_file Configuration filename

options: --inherit-environ VAR_LIST Comma-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. -p PYTHON, --python PYTHON Python executable (default: use running Python)

upload

Upload results from a JSON file to a Codespeed website.

Usage: pyperformance upload [-h] [--inherit-environ VAR_LIST] [-p PYTHON] config_file json_file

positional arguments: config_file Configuration filename json_file JSON filename

options: --inherit-environ VAR_LIST Comma-separated list of environment variable names that are inherited from the parent environment when running benchmarking subprocesses. -p PYTHON, --python PYTHON Python executable (default: use running Python)

How to get stable benchmarks

o Run the python3 -m pyperf system tune command
o Compile Python using LTO (Link Time Optimization) and PGO (profile guided optimizations): use the pyperformance compile command, which uses LTO and PGO by default
o See the advice in the pyperf documentation: How to get reproducible benchmark results.

pyperformance virtual environment

To run benchmarks, pyperformance first creates a virtual environment. It installs requirements with fixed versions to get a reproducible environment. The system Python has unknown modules installed with unknown versions, and can have .pth files run at Python startup which can modify Python behaviour or at least slow down Python startup.

What is the goal of pyperformance

A benchmark is always written for a specific purpose. Depending on how the benchmark is written and how it is run, the result can be different and so have a different meaning. The pyperformance benchmark suite has multiple goals:

o Help detect performance regressions in a Python implementation
o Validate that an optimization change makes Python faster and introduces no performance regressions, or only minor ones
o Compare two implementations of Python, for example CPython and PyPy
o Showcase Python performance in a way that is ideally representative of the performance of applications running in production

Don't disable GC nor ASLR

The pyperf module and pyperformance benchmarks are designed to produce reproducible results, but not at the price of running benchmarks in a special mode which would not be used to run applications in production. For these reasons, the Python garbage collector, the Python randomized hash function and system ASLR (Address Space Layout Randomization) are not disabled. Benchmarks don't call gc.collect() either, since CPython implements it as a stop-the-world operation, so applications avoid calling it to not kill performance.

Include outliers and spikes

Moreover, while the pyperf documentation explains how to reduce the random noise of the system and other applications, some benchmarks use the system and so can get different timings depending on the system workload, I/O performance, etc. Outliers and temporary spikes in results are not automatically removed: values are summarized by computing the average (arithmetic mean) and the standard deviation, which "contain" these spikes, instead of using, for example, the median and the median absolute deviation, which would ignore outliers. This is a deliberate choice, since applications running in production are affected by such temporary slowdowns, caused by various things like a garbage collection or a JIT compilation.
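To see these statistics for yourself, the pyperf API can be used to load a file produced by pyperformance and compare the mean and standard deviation (which include the spikes discussed above) with the median of the raw values. A minimal sketch, assuming a py36.json file produced by the run command shown earlier:

   import statistics
   import pyperf

   # Load a result file written by "pyperformance run -o py36.json"
   suite = pyperf.BenchmarkSuite.load("py36.json")
   for bench in suite.get_benchmarks():
       values = bench.get_values()  # timing values, warmups excluded
       print("%s: mean %.6f s +- %.6f s, median %.6f s (%d values)"
             % (bench.get_name(), bench.mean(), bench.stdev(),
                statistics.median(values), len(values)))

For benchmarks affected by spikes, the mean is typically noticeably larger than the median, which is exactly the effect the paragraph above describes.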
Warmups and steady state

A borderline issue is benchmark "warmups". The first values of each worker process are always slower: 10% slower in the best case, but they can be 1000% slower or more on PyPy. Right now (2017-04-14), pyperformance ignores the first values, considered as warmup, until a benchmark reaches its "steady state". The "steady state" can include temporary spikes every 5 values (ex: caused by the garbage collector), and it can still imply further JIT compiler optimizations, but with a "low" impact on the average. To be clear, "warmup" and "steady state" detection is a work in progress and a very complex topic, especially on PyPy and its JIT compiler.

Notes

Tool for comparing the performance of two Python implementations. pyperformance will run Student's two-tailed T test on the benchmark results at the 95% confidence level to indicate whether the observed difference is statistically significant.

Omitting the -b option will result in the default group of benchmarks being run: omitting -b is the same as specifying -b default. To run every benchmark pyperformance knows about, use -b all. To see a full list of all available benchmarks, use --help.

Negative benchmark specifications are also supported: -b -2to3 will run every benchmark in the default group except for 2to3 (this is the same as -b default,-2to3). -b all,-django will run all benchmarks except the Django templates benchmark. Negative groups (e.g., -b -default) are not supported. Positive benchmarks are parsed before the negative benchmarks are subtracted (the selection rules are sketched in code after the 2to3 entry below).

If --track-memory is passed, pyperformance will continuously sample the benchmark's memory usage. This currently only works on Linux 2.6.16 and higher or Windows with PyWin32. Because --track-memory introduces performance jitter while collecting memory measurements, only memory usage is reported in the final report.

BENCHMARKS

Also see Custom Benchmarks regarding how to create your own benchmarks or use someone else's.

Available Groups

Like individual benchmarks (see "Available benchmarks" below), benchmark groups are allowed after the -b option. Use the python3 -m pyperformance list_groups command to list groups and their benchmarks. Available benchmark groups:

o all: Group including all benchmarks
o apps: "High-level" applicative benchmarks (2to3, Chameleon, Tornado HTTP)
o default: Group of benchmarks run by default by the run command
o math: Float and integers
o regex: Collection of regular expression benchmarks
o serialize: Benchmarks on pickle and json modules
o startup: Collection of microbenchmarks focused on Python interpreter start-up time.
o template: Templating libraries

Available Benchmarks

In pyperformance 0.5.5, the following microbenchmarks have been removed because they are too short, not representative of real applications, and too unstable.

o call_method_slots
o call_method_unknown
o call_method
o call_simple
o pybench

2to3

Run the 2to3 tool on the pyperformance/benchmarks/data/2to3/ directory: a copy of the django/core/*.py files of Django 1.1.4, 9 files. Run the python -m lib2to3 -f all command where python is sys.executable. So the test measures not only the performance of Python itself, but also the performance of the lib2to3 module, which can change depending on the Python version.

NOTE: Files are called .py.txt instead of .py so that PEP 8 checks are not run on them, and more generally so that they are not modified.
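The -b selection rules described in the Notes section above can be summarized in a few lines of Python. This is only a sketch of the documented semantics (positive entries are collected first, then negative entries are subtracted, and the default group is used when there are no positive entries), not pyperformance's actual parser; in the real tool, group names such as default or all are also expanded into benchmark names:

   def select_benchmarks(bm_list, default_group):
       # Sketch of the documented -b/--benchmarks semantics.
       entries = [entry.strip() for entry in bm_list.split(",") if entry.strip()]
       positives = {e for e in entries if not e.startswith("-")}
       negatives = {e[1:] for e in entries if e.startswith("-")}
       selected = positives if positives else set(default_group)
       return selected - negatives

   default = ["2to3", "chaos", "go", "django_template"]     # illustrative names
   print(select_benchmarks("-2to3", default))               # default group minus 2to3
   print(select_benchmarks("chaos,go,-go", default))        # only chaos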
async_tree

Async workload benchmark, which calls asyncio.gather() on a tree (6 levels deep, 6 branches per level) with the leaf nodes simulating some [potentially] async work (depending on the benchmark variant). A minimal sketch of this pattern is shown after the fannkuch entry below. Available variants:

o async_tree: no actual async work at any leaf node.
o async_tree_io: all leaf nodes simulate an async I/O workload (async sleep of 50 ms).
o async_tree_memoization: all leaf nodes simulate an async I/O workload, with 90% of the data memoized.
o async_tree_cpu_io_mixed: half of the leaf nodes simulate a CPU-bound workload (math.factorial(500)) and the other half simulate the same workload as the async_tree_memoization variant.

These benchmarks also have an "eager" flavor that uses the asyncio eager task factory, if available.

chameleon

Render a template using the chameleon module to create an HTML table of 500 rows and 10 columns. See the chameleon.PageTemplate class.

chaos

Create chaosgame-like fractals. Command line options:

   --thickness THICKNESS     Thickness (default: 0.25)
   --width WIDTH             Image width (default: 256)
   --height HEIGHT           Image height (default: 256)
   --iterations ITERATIONS   Number of iterations (default: 5000)
   --filename FILENAME.PPM   Output filename of the PPM picture
   --rng-seed RNG_SEED       Random number generator seed (default: 1234)

When the --filename option is used, the timing includes the time to create the PPM file. Copyright (C) 2005 Carl Friedrich Bolz

[image: Chaos game, bm_chaos benchmark]

Image generated by bm_chaos (took 3 sec on CPython 3.5) with the command:

   python3 pyperformance/benchmarks/bm_chaos.py --worker -l1 -w0 -n1 --filename chaos.ppm --width=512 --height=512 --iterations 50000

crypto_pyaes

Benchmark a pure-Python implementation of the AES block cipher in CTR mode using the pyaes module. The benchmark is slower on CPython 3 than on CPython 2.7 because CPython 3 no longer has a separate "small int" type: its int type always has arbitrary precision, like the CPython 2.7 long type. See pyaes: A pure-Python implementation of the AES block cipher algorithm and the common modes of operation (CBC, CFB, CTR, ECB and OFB).

deepcopy

Benchmark the Python copy.deepcopy() function, called on a nested dictionary and a dataclass.

deltablue

DeltaBlue benchmark. Ported for the PyPy project. Contributed by Daniel Lindsley. This implementation of the DeltaBlue benchmark was directly ported from the V8 source code, which was in turn derived from the Smalltalk implementation by John Maloney and Mario Wolczko. The original JavaScript implementation was licensed under the GPL. It's been updated in places to be more idiomatic to Python (for loops over collections, a couple of magic methods, OrderedCollection being a list & things altering those collections changed to the builtin methods) but largely retains the layout & logic from the original. (Ugh.)

django_template

Use the Django template system to build a 150x150-cell HTML table. Uses the Context and Template classes of the django.template module.

dulwich_log

Iterate over the commits of the asyncio Git repository using the Dulwich module. Uses the pyperformance/benchmarks/data/asyncio.git/ repository. Pseudo-code of the benchmark:

   repo = dulwich.repo.Repo(repo_path)
   head = repo.head()
   for entry in repo.get_walker(head):
       pass

See the Dulwich project.

docutils

Use Docutils to convert Docutils' documentation to HTML. Representative of building a medium-sized documentation set.

fannkuch

The Computer Language Benchmarks Game: http://benchmarksgame.alioth.debian.org/ Contributed by Sokolov Yura, modified by Tupteq.
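The following is a minimal sketch of the async_tree call pattern described above, not the pyperformance source; the constants and coroutine names are illustrative only, and the sleep corresponds to the async_tree_io variant:

   import asyncio

   NUM_LEVELS = 6     # depth of the tree, as described above
   NUM_BRANCHES = 6   # children per node

   async def leaf():
       # async_tree_io variant: simulate async I/O with a 50 ms sleep;
       # the plain async_tree variant would do no work here
       await asyncio.sleep(0.05)

   async def node(level):
       if level == 0:
           await leaf()
       else:
           await asyncio.gather(*(node(level - 1) for _ in range(NUM_BRANCHES)))

   asyncio.run(node(NUM_LEVELS))

The memoization and cpu_io_mixed variants change only what happens at the leaves (cached results or a math.factorial(500) call), as described in the variant list above.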
float

Artificial, floating point-heavy benchmark originally used by Factor. Create 100,000 point objects which compute math.cos(), math.sin() and math.sqrt(). Changed in version 0.5.5: Use __slots__ on the Point class to focus the benchmark on float rather than testing performance of class attributes.

genshi

Render a template using Genshi (genshi.template module):

o genshi_text: Render an HTML template using the NewTextTemplate class
o genshi_xml: Render an XML template using the MarkupTemplate class

See the Genshi project.

go

Artificial intelligence playing the Go board game. Uses Zobrist hashing.

hexiom

Solver of the Hexiom board game (level 25 by default). Command line option:

   --level {2,10,20,25,30,36}   Hexiom board level (default: 25)

hg_startup

Get Mercurial's help screen. Measure the performance of the python path/to/hg help command using pyperf.Runner.bench_command(), where python is sys.executable and path/to/hg is the Mercurial program installed in a virtual environment. bench_command() redirects stdout and stderr into /dev/null. See the Mercurial project.

html5lib

Parse the pyperformance/benchmarks/data/w3_tr_html5.html HTML file (132 KB) using html5lib. The file is the HTML 5 specification, truncated so that it can be parsed in less than 1 second (around 250 ms). On CPython, after 3 warmups, the benchmark enters a cycle of 5 values: every 5th value is 10% slower. Plot of 1 run of 50 values (the warmup is not rendered):

[image: html5lib values]

See the html5lib project.

json_dumps, json_loads

Benchmark the dumps() and loads() functions of the json module. bm_json_dumps.py command line option:

   --cases CASES   Comma-separated list of cases. Available cases: EMPTY, SIMPLE, NESTED, HUGE. By default, run all cases.

logging

Benchmarks on the logging module:

o logging_format: Benchmark logger.warn(fmt, str)
o logging_simple: Benchmark logger.warn(msg)
o logging_silent: Benchmark logger.debug(msg) when the log is ignored

Script command line options: format, silent, simple. See the logging module.

mako

Use the Mako template system to build a 150x150-cell HTML table. Includes:

o two template inheritances
o HTML escaping, XML escaping, URL escaping, whitespace trimming
o function definitions and calls
o for loops

See the Mako project.

mdp

Battle with damages and topological sorting of nodes in a graph. See Topological sorting.

meteor_contest

Solver for the Meteor Puzzle board. Meteor Puzzle board: http://benchmarksgame.alioth.debian.org/u32/meteor-description.html#meteor The Computer Language Benchmarks Game: http://benchmarksgame.alioth.debian.org/ Contributed by Daniel Nanz, 2008-08-21.

nbody

N-body benchmark from the Computer Language Benchmarks Game. Microbenchmark on floating point operations. This is intended to support Unladen Swallow's perf.py. Accordingly, it has been modified from the Shootout version:

o Accept standard Unladen Swallow benchmark options.
o Run report_energy()/advance() in a loop.
o Reimplement itertools.combinations() to work with older Python versions.

Pulled from: http://benchmarksgame.alioth.debian.org/u64q/program.php?test=nbody&lang=python3&id=1 Contributed by Kevin Carson. Modified by Tupteq, Fredrik Johansson, and Daniel Nanz.

python_startup, python_startup_nosite

o python_startup: Measure the Python startup time by running python -c pass, where python is sys.executable
o python_startup_nosite: Measure the Python startup time without importing the site module by running python -S -c pass, where python is sys.executable

Run the benchmark with pyperf.Runner.bench_command().
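Both hg_startup and python_startup are measured with pyperf.Runner.bench_command(). A minimal sketch of such a command benchmark, as a simplified illustration rather than the pyperformance source:

   import sys
   import pyperf

   runner = pyperf.Runner()
   # Spawn "python -c pass" repeatedly and measure the wall-clock time of
   # each invocation; stdout and stderr are redirected to /dev/null.
   runner.bench_command("python_startup", [sys.executable, "-c", "pass"])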
nqueens

Simple, brute-force N-Queens solver. See Eight queens puzzle.

pathlib

Test the performance of operations of the pathlib module of the standard library. This benchmark stresses the creation of small objects, globbing, and system calls. See the documentation of the pathlib module.

pickle

pickle benchmarks (serialize):

o pickle: use the cPickle module to pickle a variety of datasets.
o pickle_dict: microbenchmark; use the cPickle module to pickle a lot of dicts.
o pickle_list: microbenchmark; use the cPickle module to pickle a lot of lists.
o pickle_pure_python: use the pure-Python pickle module to pickle a variety of datasets.

unpickle benchmarks (deserialize):

o unpickle: use the cPickle module to unpickle a variety of datasets.
o unpickle_list
o unpickle_pure_python: use the pure-Python pickle module to unpickle a variety of datasets.

pidigits

Calculating 2,000 digits of π. This benchmark stresses big integer arithmetic. Command line option:

   --digits DIGITS   Number of computed pi digits (default: 2000)

Adapted from code on: http://benchmarksgame.alioth.debian.org/

pyflate

Benchmark of a pure-Python bzip2 decompressor: decompress the pyperformance/benchmarks/data/interpreter.tar.bz2 file in memory. Copyright 2006--2007-01-21 Paul Sladen: http://www.paul.sladen.org/projects/compression/ You may use and distribute this code under any DFSG-compatible license (eg. BSD, GNU GPLv2). Stand-alone pure-Python DEFLATE (gzip) and bzip2 decoder/decompressor. This is probably most useful for research purposes/index building; there is certainly some room for improvement in the Huffman bit-matcher. With the as-written implementation, there was a known bug in BWT decoding to do with repeated strings. This has been worked around; see 'bwt_reverse()'. Correct output is produced in all test cases but ideally the problem would be found...

raytrace

Simple raytracer. Command line options:

   --width WIDTH             Image width (default: 100)
   --height HEIGHT           Image height (default: 100)
   --filename FILENAME.PPM   Output filename of the PPM picture

This file contains definitions for a simple raytracer. Copyright Callum and Tony Garnock-Jones, 2008. This file may be freely redistributed under the MIT license, http://www.opensource.org/licenses/mit-license.php From https://leastfixedpoint.com/tonyg/kcbbs/lshift_archive/toy-raytracer-in-python-20081029.html

[image: Pure Python raytracer]

Image generated by the command (took 68.4 sec on CPython 3.5):

   python3 pyperformance/benchmarks/bm_raytrace.py --worker --filename=raytrace.ppm -l1 -w0 -n1 -v --width=800 --height=600

regex_compile

Stress the performance of Python's regex compiler, rather than the regex execution speed. Benchmark how quickly Python's regex implementation can compile regexes. We bring in all the regexes used by the other regex benchmarks, capture them by stubbing out the re module, then compile those regexes repeatedly. We muck with the re module's caching to force it to recompile every regex we give it.

regex_dna

regex DNA benchmark using "fasta" to generate the test case. The Computer Language Benchmarks Game http://benchmarksgame.alioth.debian.org/ regex-dna Python 3 #5 program: contributed by Dominique Wahli, 2to3, modified by Justin Peel. fasta Python 3 #3 program: modified by Ian Osgood, modified again by Heinrich Acker, modified by Justin Peel, modified by Christopher Sean Forgeron.

regex_effbot

Some of the original benchmarks used to tune mainline Python's current regex engine.

regex_v8

Python port of V8's regex benchmark.
Automatically generated on 2009-01-30. This benchmark is generated by loading 50 of the most popular pages on the web and logging all regexp operations performed. Each operation is given a weight that is calculated from an estimate of the popularity of the pages where it occurs and the number of times it is executed while loading each page. Finally the literal letters in the data are encoded using ROT13 in a way that does not affect how the regexps match their input. Ported to Python for Unladen Swallow. The original JS version can be found at https://github.com/v8/v8/blob/master/benchmarks/regexp.js, r1243. richards The classic Python Richards benchmark. Based on a Java version. Based on original version written in BCPL by Dr Martin Richards in 1981 at Cambridge University Computer Laboratory, England and a C++ version derived from a Smalltalk version written by L Peter Deutsch. Java version: Copyright (C) 1995 Sun Microsystems, Inc. Translation from C++, Mario Wolczko Outer loop added by Alex Jacoby scimark o scimark_sor: Successive over-relaxation (SOR) benchmark o scimark_sparse_mat_mult: sparse matrix multiplication benchmark o scimark_monte_carlo: benchmark on the Monte Carlo algorithm to compute the area of a disc o scimark_lu: LU decomposition benchmark o scimark_fft: Fast Fourier transform (FFT) benchmark spectral_norm MathWorld: "Hundred-Dollar, Hundred-Digit Challenge Problems", Challenge #3. http://mathworld.wolfram.com/Hundred-DollarHundred-DigitChallengeProblems.html The Computer Language Benchmarks Game http://benchmarksgame.alioth.debian.org/u64q/spectralnorm-description.html#spectralnorm Contributed by Sebastien Loisel. Fixed by Isaac Gouy. Sped up by Josh Goldfoot. Dirtily sped up by Simon Descarpentries. Concurrency by Jason Stitt. sqlalchemy_declarative, sqlalchemy_imperative o sqlalchemy_declarative: SQLAlchemy Declarative benchmark using SQLite o sqlalchemy_imperative: SQLAlchemy Imperative benchmark using SQLite See the SQLAlchemy project. sqlite_synth Benchmark Python aggregate for SQLite. The goal of the benchmark (written for PyPy) is to test CFFI performance and going back and forth between SQLite and Python a lot. Therefore the queries themselves are really simple. See the SQLite project and the Python sqlite3 module (stdlib). sympy Benchmark on the sympy module: o sympy_expand: Benchmark sympy.expand() o sympy_integrate: Benchmark sympy.integrate() o sympy_str: Benchmark str(sympy.expand()) o sympy_sum: Benchmark sympy.summation() On CPython, some sympy_sum values are 5%-10% slower: $ python3 -m pyperf dump sympy_sum.json Run 1: 1 warmup, 50 values, 1 loop - warmup 1: 404 ms (+63%) - value 1: 244 ms - value 2: 245 ms - value 3: 258 ms <---- - value 4: 245 ms - value 5: 245 ms - value 6: 279 ms (+12%) <---- - value 7: 246 ms - value 8: 244 ms - value 9: 245 ms - value 10: 255 ms <---- - value 11: 245 ms - value 12: 245 ms - value 13: 256 ms <---- - value 14: 248 ms - value 15: 245 ms - value 16: 245 ms ... Plot of 1 run of 50 values (the warmup is not rendered): [image: sympy_sum values] [image] See the sympy project. telco Telco Benchmark for measuring the performance of decimal calculations: o http://speleotrove.com/decimal/telco.html o http://speleotrove.com/decimal/telcoSpec.html o A call type indicator, c, is set from the bottom (least significant) bit of the duration (hence c is 0 or 1). o A rate, r, is determined from the call type. Those calls with c=0 have a low r: 0.0013; the remainder (`distance calls') have a `premium' r: 0.00894. 
(The rates are, very roughly, in Euros or dollarates per second.) o A price, p, for the call is then calculated (p=r*n). This is rounded to exactly 2 fractional digits using round-half-even (Banker's round to nearest). o A basic tax, b, is calculated: b=p*0.0675 (6.75%). This is truncated to exactly 2 fractional digits (round-down), and the total basic tax variable is then incremented (sumB=sumB+b). o For distance calls: a distance tax, d, is calculated: d=p*0.0341 (3.41%). This is truncated to exactly 2 fractional digits (round-down), and then the total distance tax variable is incremented (sumD=sumD+d). o The total price, t, is calculated (t=p+b, and, if a distance call, t=t+d). o The total prices variable is incremented (sumT=sumT+t). o The total price, t, is converted to a string, s. The Python benchmark is implemented with the decimal module. See the Python decimal module (stdlib). tornado_http Benchmark HTTP server of the tornado module See the Tornado project. unpack_sequence Microbenchmark for unpacking lists and tuples. Pseudo-code: a, b, c, d, e, f, g, h, i, j = to_unpack where to_unpack is tuple(range(10)) or list(range(10)). xml_etree Benchmark the ElementTree API of the xml.etree module: o xml_etree_generate: Create an XML document o xml_etree_iterparse: Benchmark etree.iterparse() o xml_etree_parse: Benchmark etree.parse() o xml_etree_process: Process an XML document See the Python xml.etree.ElementTree module (stdlib). CUSTOM BENCHMARKS pyperformance includes its own set of benchmarks (see Benchmarks). However, it also supports using custom benchmarks. Using Custom Benchmarks To use custom benchmarks, you will need to use the --manifest CLI option and provide the path to the manifest file describing those benchmarks. The pyperformance File Formats pyperformance uses two file formats to identify benchmarks: o manifest - a set of benchmarks o metadata - a single benchmark For each benchmark, there are two required files and several optional ones. Those files are expected to be in a specific directory structure (unless customized in the metadata). The structure (see below) is such that it's easy to maintain a benchmark (or set of benchmarks) on GitHub and distribute it on PyPI. It also simplifies publishing a Python project's benchmarks. The alternative is pointing people at a repo. Benchmarks can inherit metadata from other metadata files. This is useful for keeping common metadata for a set of benchmarks (e.g. "version") in one file. Likewise, benchmarks for a Python project can inherit metadata from the project's pyproject.toml. Sometimes a benchmark will have one or more variants that run using the same script. Variants like this are supported by pyperformance without requiring much extra effort. Benchmark Directory Structure Normally a benchmark is structured like this: bm_NAME/ data/ # if needed requirements.txt # lock file, if any pyproject.toml run_benchmark.py (Note the "bm_" prefix on the directory name.) "pyproject.toml" holds the metadata. "run_benchmark.py" holds the actual benchmark code. Both are necessary. pyperformance treats the metadata file as the fundamental source of information about a benchmark. A manifest for a set of benchmarks is effectively a mapping of names to metadata files. So a metadata file is essential. It can be located anywhere on disk. However, if it isn't located in the structure described above then the metadata must identify where to find the other files. Other than that, only a benchmark script (e.g. 
"run_benchmark.py" above) is required. All other files are optional. When a benchmark has variants, each has its own metadata file next to the normal "pyproject.toml", named "bm_NAME.toml". (Note the "bm_" prefix.) The format of variant metadata files is exactly the same. pyperformance treats them the same, except that the sibling "pyproject.toml" is inherited by default. Manifest Files A manifest file identifies a set of benchmarks, as well as (optionally) how they should be grouped. pyperformance uses the manifest to determine which benchmarks are available to run (and thus which to run by default). A manifest normally looks like this: [benchmarks] name metafile bench1 somedir/bm_bench1/pyproject.toml bench2 somedir/pyproject.toml bench3 ../anotherdir The "benchmarks" section is a table with rows of tab-separated-values. The "name" value is how pyperformance will identify the benchmark. The "metafile" value is where pyperformance will look for the benchmark's metadata. If a metafile is a directory then it looks for "pyproject.toml" in that directory. Benchmark Groups The other sections in the manifest file relate to grouping: [benchmarks] name metafile bench1 somedir/bm_bench1 bench2 somedir/bm_bench2 bench3 anotherdir/mybench.toml [groups] tag1 tag2 [group default] bench2 bench3 [group tricky] bench2 The "groups" section specifies available groups that may be identified by benchmark tags (see about tags in the metadata section below). Any other group sections in the manifest are automatically added to the list of available groups. If no "default" group is specified then one is automatically added with all benchmarks from the "benchmarks" section in it. If there is no "groups" section and no individual group sections (other than "default") then the set of all tags of the known benchmarks is treated as "groups". A group named "all" as also automatically added which has all known benchmarks in it. Benchmarks can be excluded from a group by using a - (minus) prefix. Any benchmark alraedy in the list (at that point) that matches will be dropped from the list. If the first entry in the section is an exclusion then all known benchmarks are first added to the list before the exclusion is applied. For example: [benchmarks] name metafile bench1 somedir/bm_bench1 bench2 somedir/bm_bench2 bench3 anotherdir/mybench.toml [group default] -bench1 This means by default only "bench2" and "bench3" are run. Merging Manifests To combine manifests, use the [includes] section in the manifest: [includes] project1/benchmarks/MANIFEST project2/benchmarks/MANIFEST Note that is the same as including the manifest file for the default pyperformance benchmarks. A Local Benchmark Suite Often a project will have more than one benchmark that it will treat as a suite. pyperformance handles this without any extra work. In the dirctory holding the manifest file put all the benchmarks. Then put in the "metafile" column, like this: [benchmarks] name metafile bench1 bench2 bench3 bench4 bench5 It will look for DIR/bm_NAME/pyproject.toml. If there are also variants, identify the main benchmark in the "metafile" value, like this: [benchmarks] name metafile bench1 bench2 bench3 variant1 variant2 pyperformance will look for DIR/bm_BASE/bm_NAME.toml, where "BASE" is the part after "local:". A Project's Benchmark Suite A Python project can identify its benchmark suite by putting the path to the manifest file in the project's top-level pyproject.toml. 
Additional manifests can be identified as well: [tool.pyperformance] manifest = "..." manifests = ["...", "..."] (Reminder: that is the pyproject.toml, not the manifest file.) Benchmark Metadata Files A benchmark's metadata file (usually pyproject.toml) follows the format specified in PEP 621 and PEP 518. So there are two supported sections in the file: "project" and "tool.pyperformance". A typical metadata file will look something like this: [project] version = "0.9.1" dependencies = ["pyperf"] dynamic = ["name"] [tool.pyperformance] name = "my_benchmark" A highly detailed one might look like this: [project] name = "pyperformance_bm_json_dumps" version = "0.9.1" description = "A benchmark for json.dumps()" requires-python = ">=3.8" dependencies = ["pyperf"] urls = {repository = "https://github.com/python/pyperformance"} dynamic = ["version"] [tool.pyperformance] name = "json_dumps" tags = "serialize" runscript = "bench.py" datadir = ".data-files/extras" extra_opts = ["--special"] Inheritance For one benchmark to inherit from another (or from common metadata), the "inherits" field is available: [project] dependencies = ["pyperf"] dynamic = ["name", "version"] [tool.pyperformance] name = "my_benchmark" inherits = "../common.toml" All values in either section of the inherited metadata are treated as defaults, on top of which the current metadata is applied. In the above example, for instance, a value for "version" in common.toml would be used here. If the "inherits" value is a directory (even for "..") then "base.toml" in that directory will be inherited. For variants, the base pyproject.toml is the default value for "inherits". Inferred Values In some situations, omitted values will be inferred from other available data (even for required fields). o project.name <= tool.pyperformance.name o project.* <= inherited metadata (except for "name" and "dynamic") o tool.pyperformance.name <= metadata filename o tool.pyperformance.* <= inherited metadata (except for "name" and "inherits") When the name is inferred from the filename for a regularly structured benchmark, the "bm_" prefix is removed from the benchmark's directory. If it is a variant that prefix is removed from the metadata filename, as well as the .toml suffix. The [project] Section +---------------------+-------+---+---+---+---+ |field | type | R | T | B | D | +---------------------+-------+---+---+---+---+ |project.name | str | X | X | | | +---------------------+-------+---+---+---+---+ |project.version | ver | X | | X | X | +---------------------+-------+---+---+---+---+ |project.dependencies | [str] | | | X | | +---------------------+-------+---+---+---+---+ |project.dynamic | [str] | | | | | +---------------------+-------+---+---+---+---+ "R": required "T": inferred from the tool section "B": inferred from the inherited metadata "D": for default benchmarks, inferred from pyperformance "dynamic" is required by PEP 621 for when a field will be filled in dynamically by the tool. This is especially important for required fields. All other PEP 621 fields are optional (e.g. requires-python = ">=3.8", {repository = "https://github.com/..."}). 
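For reference, the benchmark script itself ("run_benchmark.py", or whatever "runscript" in the tool section points to) is an ordinary pyperf script that pyperformance runs inside the benchmark virtual environment. A minimal sketch with a hypothetical workload; this file and its names are illustrative only, not part of pyperformance:

   # bm_my_benchmark/run_benchmark.py (hypothetical example)
   import pyperf

   def workload():
       # Hypothetical workload: build and join a list of strings.
       return ",".join(str(i) for i in range(10_000))

   if __name__ == "__main__":
       runner = pyperf.Runner()
       runner.metadata["description"] = "Example custom benchmark"
       runner.bench_func("my_benchmark", workload)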
The [tool.pyperformance] Section +----------------+-------+---+---+---+ |field | type | R | B | F | +----------------+-------+---+---+---+ |tool.name | str | X | | X | +----------------+-------+---+---+---+ |tool.tags | [str] | | X | | +----------------+-------+---+---+---+ |tool.extra_opts | [str] | | X | | +----------------+-------+---+---+---+ |tool.inherits | file | | | | +----------------+-------+---+---+---+ |tool.runscript | file | | X | | +----------------+-------+---+---+---+ |tool.datadir | file | | X | | +----------------+-------+---+---+---+ "R": required "B": inferred from the inherited metadata "F": inferred from filename o tags: optional list of names to group benchmarks o extra_opts: optional list of args to pass to tool.runscript o runscript: the benchmark script to use instead of run_benchmark.py. CPYTHON RESULTS, 2017 This page lists benchmarks which became faster in CPython. Optimizations 2016-12-14: speedup method calls Optimization: Speedup method calls 1.2x, commit f2392133. +--------------------+----------------+----------------+ |Benchmark | 2016-12-01 | 2017-01-01 | | | (27580c1fb5e8) | (67e1aa0b58be) | +--------------------+----------------+----------------+ |call_method | 14.1 ms | 11.2 ms: 1.26x | | | | faster (-21%) | +--------------------+----------------+----------------+ |call_method_slots | 13.9 ms | 11.1 ms: 1.25x | | | | faster (-20%) | +--------------------+----------------+----------------+ |call_method_unknown | 16.0 ms | 14.3 ms: 1.12x | | | | faster (-11%) | +--------------------+----------------+----------------+ 2016-04-22: pymalloc allocator Optimization: PyMem_Malloc() now uses the fast pymalloc allocator, commit f5c4b990. Changes of at least 5%: +---------------------+----------------+-----------------+ |Benchmark | 2016-04-21 | 2016-04-22 | | | (5439fc4901db) | (f5c4b99034fa) | +---------------------+----------------+-----------------+ |unpickle_list | 10.4 us | 7.64 us: 1.36x | | | | faster (-27%) | +---------------------+----------------+-----------------+ |json_dumps | 28.0 ms | 25.2 ms: 1.11x | | | | faster (-10%) | +---------------------+----------------+-----------------+ |unpickle_pure_python | 741 us | 678 us: 1.09x | | | | faster (-9%) | +---------------------+----------------+-----------------+ |unpickle | 33.9 us | 31.3 us: 1.08x | | | | faster (-8%) | +---------------------+----------------+-----------------+ |meteor_contest | 197 ms | 183 ms: 1.08x | | | | faster (-7%) | +---------------------+----------------+-----------------+ |mako | 36.9 ms | 34.3 ms: 1.07x | | | | faster (-7%) | +---------------------+----------------+-----------------+ |pathlib | 41.0 ms | 38.4 ms: 1.07x | | | | faster (-6%) | +---------------------+----------------+-----------------+ |call_method_slots | 14.8 ms | 13.9 ms: 1.07x | | | | faster (-6%) | +---------------------+----------------+-----------------+ |telco | 19.5 ms | 18.3 ms: 1.07x | | | | faster (-6%) | +---------------------+----------------+-----------------+ |scimark_lu | 413 ms | 388 ms: 1.07x | | | | faster (-6%) | +---------------------+----------------+-----------------+ |nqueens | 221 ms | 207 ms: 1.07x | | | | faster (-6%) | +---------------------+----------------+-----------------+ |fannkuch | 937 ms | 882 ms: 1.06x | | | | faster (-6%) | +---------------------+----------------+-----------------+ |regex_compile | 319 ms | 301 ms: 1.06x | | | | faster (-6%) | +---------------------+----------------+-----------------+ |raytrace | 1.16 sec | 1.09 sec: 1.06x | | | | faster (-5%) | 
+---------------------+----------------+-----------------+ |pickle_pure_python | 1.11 ms | 1.05 ms: 1.05x | | | | faster (-5%) | +---------------------+----------------+-----------------+ |genshi_text | 70.1 ms | 66.6 ms: 1.05x | | | | faster (-5%) | +---------------------+----------------+-----------------+ 2015-12-07: Optimize ElementTree.iterparse(), xml_etree_iterparse Optimization: Issue #25638: Optimized ElementTree.iterparse(); it is now 2x faster, commit 9ec5e25f2. +--------------------+----------------+----------------+ |Benchmark | 2015-12-01 | 2016-01-01 | | | (df144092a340) | (71db90356390) | +--------------------+----------------+----------------+ |xml_etree_iterparse | 423 ms | 206 ms: 2.05x | | | | faster (-51%) | +--------------------+----------------+----------------+ 2015-09-19: PGO uses test suite, pidigits Optimization: Issue #24915: Add Clang support to PGO builds and use the test suite for profile data, commit 7188a3ef. Changes of at least 5%: +--------------+----------------+------------------+ |Benchmark | 2015-09-18 | 2015-09-18_22-13 | | | (4b363e270108) | (7188a3efe07b) | +--------------+----------------+------------------+ |pickle | 33.7 us | 26.4 us: 1.28x | | | | faster (-22%) | +--------------+----------------+------------------+ |pidigits | 332 ms | 286 ms: 1.16x | | | | faster (-14%) | +--------------+----------------+------------------+ |pickle_list | 9.90 us | 8.84 us: 1.12x | | | | faster (-11%) | +--------------+----------------+------------------+ |unpickle | 37.2 us | 33.3 us: 1.12x | | | | faster (-11%) | +--------------+----------------+------------------+ |unpickle_list | 11.1 us | 9.95 us: 1.11x | | | | faster (-10%) | +--------------+----------------+------------------+ |regex_dna | 330 ms | 297 ms: 1.11x | | | | faster (-10%) | +--------------+----------------+------------------+ |regex_effbot | 6.43 ms | 5.80 ms: 1.11x | | | | faster (-10%) | +--------------+----------------+------------------+ |pickle_dict | 69.3 us | 64.1 us: 1.08x | | | | faster (-8%) | +--------------+----------------+------------------+ |mako | 39.1 ms | 36.2 ms: 1.08x | | | | faster (-7%) | +--------------+----------------+------------------+ |call_simple | 12.2 ms | 11.6 ms: 1.05x | | | | faster (-5%) | +--------------+----------------+------------------+ |genshi_xml | 175 ms | 166 ms: 1.05x | | | | faster (-5%) | +--------------+----------------+------------------+ Changes of at least 5%, sadly two benchmarks also became slower: +---------------------+--------------------------------------+--------------------------------------+ |Benchmark | 2015-09-18_14-32-master-4b363e270108 | 2015-09-18_22-13-master-7188a3efe07b | +---------------------+--------------------------------------+--------------------------------------+ |unpickle_pure_python | 776 us | 821 us: 1.06x slower (+6%) | +---------------------+--------------------------------------+--------------------------------------+ |regex_v8 | 49.5 ms | 52.6 ms: 1.06x slower (+6%) | +---------------------+--------------------------------------+--------------------------------------+ 2015-05-30: C implementation of collections.OrderedDict, html5lib Optimization: Issue #16991: Add a C implementation of collections.OrderedDict, commit 96c6af9b. 
+----------+----------------+----------------+ |Benchmark | 2015-05-02 | 2015-06-01 | | | (3b4d30a27bd6) | (41874c570cf3) | +----------+----------------+----------------+ |html5lib | 285 ms | 233 ms: 1.23x | | | | faster (-19%) | +----------+----------------+----------------+ 2015-05-23: C implementation of functools.lru_cache(), sympy Optimization: Issue #14373: Added C implementation of functools.lru_cache(), commit 1c858c35. Changes of at least 5%: +----------------+--------------------------------------+--------------------------------------+ |Benchmark | 2015-05-23_19-15-master-c70908558d8e | 2015-05-23_19-42-master-1c858c352b8c | +----------------+--------------------------------------+--------------------------------------+ |sympy_expand | 1.45 sec | 1.14 sec: 1.27x faster (-21%) | +----------------+--------------------------------------+--------------------------------------+ |sympy_sum | 308 ms | 247 ms: 1.25x faster (-20%) | +----------------+--------------------------------------+--------------------------------------+ |sympy_str | 621 ms | 500 ms: 1.24x faster (-19%) | +----------------+--------------------------------------+--------------------------------------+ |sympy_integrate | 54.2 ms | 45.7 ms: 1.19x faster (-16%) | +----------------+--------------------------------------+--------------------------------------+ |scimark_lu | 497 ms | 471 ms: 1.06x faster (-5%) | +----------------+--------------------------------------+--------------------------------------+ pickle_dict is seen as 1.06x slower, but since pickle doesn't use functools.lru_cache(), the change is ignored in the table. Slowdown 2016-09-11: regex_compile Slowdown: convert re flags to (much friendlier) IntFlag constants (issue #28082), commit f93395bc. +--------------+----------------+----------------+----------------+ |Benchmark | 2016-04-01 | 2016-07-01 | 2016-10-01 | | | (6b6abd4cf10e) | (355048970b2a) | (78a111c7d867) | +--------------+----------------+----------------+----------------+ |regex_compile | 339 ms | 309 ms: 1.10x | 383 ms: 1.13x | | | | faster (-9%) | slower (+13%) | +--------------+----------------+----------------+----------------+ Timeline April, 2016 -> May, 2016 2016-04-01 .. 2016-05-01: +----------+----------------+-----------------+ |Benchmark | 2016-04-01 | 2016-05-01 | | | (dcfebb32e277) | (f1e2671fdf88) | +----------+----------------+-----------------+ |nqueens | 255 ms | 207 ms: 1.23x | | | | faster (-19%) | +----------+----------------+-----------------+ |raytrace | 1.31 sec | 1.09 sec: 1.19x | | | | faster (-16%) | +----------+----------------+-----------------+ |float | 290 ms | 243 ms: 1.19x | | | | faster (-16%) | +----------+----------------+-----------------+ |chaos | 273 ms | 235 ms: 1.16x | | | | faster (-14%) | +----------+----------------+-----------------+ |hexiom | 21.0 ms | 18.6 ms: 1.13x | | | | faster (-11%) | +----------+----------------+-----------------+ |deltablue | 16.4 ms | 14.6 ms: 1.12x | | | | faster (-11%) | +----------+----------------+-----------------+ |go | 557 ms | 502 ms: 1.11x | | | | faster (-10%) | +----------+----------------+-----------------+ |nbody | 254 ms | 232 ms: 1.10x | | | | faster (-9%) | +----------+----------------+-----------------+ call_method Timeline 2016-04-01 .. 
2017-01-01: +--------------------+----------------+----------------+----------------+----------------+ |Benchmark | 2016-04-01 | 2016-07-01 | 2016-10-01 | 2017-01-01 | | | (6b6abd4cf10e) | (355048970b2a) | (78a111c7d867) | (67e1aa0b58be) | +--------------------+----------------+----------------+----------------+----------------+ |call_method | 15.8 ms | 14.9 ms: 1.06x | 14.1 ms: 1.13x | 11.2 ms: 1.42x | | | | faster (-6%) | faster (-11%) | faster (-29%) | +--------------------+----------------+----------------+----------------+----------------+ |call_method_slots | 15.7 ms | 15.2 ms: 1.03x | 14.0 ms: 1.13x | 11.1 ms: 1.42x | | | | faster (-3%) | faster (-11%) | faster (-29%) | +--------------------+----------------+----------------+----------------+----------------+ |call_method_unknown | 17.7 ms | 15.9 ms: 1.11x | 15.6 ms: 1.13x | 14.3 ms: 1.23x | | | | faster (-10%) | faster (-11%) | faster (-19%) | +--------------------+----------------+----------------+----------------+----------------+ crypto_pyaes +-------------+---------------------+---------------------+ |Benchmark | 2016-04-01 (master) | 2016-05-01 (master) | +-------------+---------------------+---------------------+ |crypto_pyaes | 226 ms | 205 ms: 1.10x | | | | faster (-9%) | +-------------+---------------------+---------------------+ 2016-03-01 .. 2016-06-01: +-------------+----------------+----------------+ |Benchmark | 2016-03-01 | 2016-06-01 | | | (13d09afff127) | (d80ab7d94578) | +-------------+----------------+----------------+ |crypto_pyaes | 231 ms | 199 ms: 1.16x | | | | faster (-14%) | +-------------+----------------+----------------+ json_loads Progress on 21 months, 2015-01-01 .. 2016-10-01: +-----------+----------------+----------------+ |Benchmark | 2015-01-01 | 2016-10-01 | | | (52074ac866eb) | (78a111c7d867) | +-----------+----------------+----------------+ |json_loads | 64.0 us | 56.6 us: 1.13x | | | | faster (-11%) | +-----------+----------------+----------------+ logging_silent +---------------+----------------+----------------+ |Benchmark | 2016-01-01 | 2016-07-01 | | | (899b72cee21c) | (355048970b2a) | +---------------+----------------+----------------+ |logging_silent | 718 ns | 606 ns: 1.18x | | | | faster (-16%) | +---------------+----------------+----------------+ pickle pickle, 2016-08-02 .. 2016-09-08: +----------+----------------+----------------+ |Benchmark | 2016-08-02 | 2016-09-08 | | | (133138a284be) | (10427f44852b) | +----------+----------------+----------------+ |pickle | 25.5 us | 21.4 us: 1.19x | | | | faster (-16%) | +----------+----------------+----------------+ pickle dict/list: +------------+----------------+----------------+ |Benchmark | 2016-04-01 | 2016-10-01 | | | (6b6abd4cf10e) | (78a111c7d867) | +------------+----------------+----------------+ |pickle_dict | 64.5 us | 57.7 us: 1.12x | | | | faster (-11%) | +------------+----------------+----------------+ |pickle_list | 9.06 us | 7.79 us: 1.16x | | | | faster (-14%) | +------------+----------------+----------------+ unpickle: +----------+----------------+----------------+ |Benchmark | 2015-07-01 | 2015-10-01 | | | (d7982beca93c) | (30b7138fe12b) | +----------+----------------+----------------+ |unpickle | 36.9 us | 32.8 us: 1.13x | | | | faster (-11%) | +----------+----------------+----------------+ python_startup 2015-04-01 .. 
Timeline

April, 2016 -> May, 2016

2016-04-01 .. 2016-05-01:

+-----------+-----------------+-----------------+
| Benchmark | 2016-04-01      | 2016-05-01      |
|           | (dcfebb32e277)  | (f1e2671fdf88)  |
+-----------+-----------------+-----------------+
| nqueens   | 255 ms          | 207 ms: 1.23x   |
|           |                 | faster (-19%)   |
+-----------+-----------------+-----------------+
| raytrace  | 1.31 sec        | 1.09 sec: 1.19x |
|           |                 | faster (-16%)   |
+-----------+-----------------+-----------------+
| float     | 290 ms          | 243 ms: 1.19x   |
|           |                 | faster (-16%)   |
+-----------+-----------------+-----------------+
| chaos     | 273 ms          | 235 ms: 1.16x   |
|           |                 | faster (-14%)   |
+-----------+-----------------+-----------------+
| hexiom    | 21.0 ms         | 18.6 ms: 1.13x  |
|           |                 | faster (-11%)   |
+-----------+-----------------+-----------------+
| deltablue | 16.4 ms         | 14.6 ms: 1.12x  |
|           |                 | faster (-11%)   |
+-----------+-----------------+-----------------+
| go        | 557 ms          | 502 ms: 1.11x   |
|           |                 | faster (-10%)   |
+-----------+-----------------+-----------------+
| nbody     | 254 ms          | 232 ms: 1.10x   |
|           |                 | faster (-9%)    |
+-----------+-----------------+-----------------+

call_method

Timeline 2016-04-01 .. 2017-01-01:

+---------------------+-----------------+-----------------+-----------------+-----------------+
| Benchmark           | 2016-04-01      | 2016-07-01      | 2016-10-01      | 2017-01-01      |
|                     | (6b6abd4cf10e)  | (355048970b2a)  | (78a111c7d867)  | (67e1aa0b58be)  |
+---------------------+-----------------+-----------------+-----------------+-----------------+
| call_method         | 15.8 ms         | 14.9 ms: 1.06x  | 14.1 ms: 1.13x  | 11.2 ms: 1.42x  |
|                     |                 | faster (-6%)    | faster (-11%)   | faster (-29%)   |
+---------------------+-----------------+-----------------+-----------------+-----------------+
| call_method_slots   | 15.7 ms         | 15.2 ms: 1.03x  | 14.0 ms: 1.13x  | 11.1 ms: 1.42x  |
|                     |                 | faster (-3%)    | faster (-11%)   | faster (-29%)   |
+---------------------+-----------------+-----------------+-----------------+-----------------+
| call_method_unknown | 17.7 ms         | 15.9 ms: 1.11x  | 15.6 ms: 1.13x  | 14.3 ms: 1.23x  |
|                     |                 | faster (-10%)   | faster (-11%)   | faster (-19%)   |
+---------------------+-----------------+-----------------+-----------------+-----------------+

crypto_pyaes

+--------------+-----------------+-----------------+
| Benchmark    | 2016-04-01      | 2016-05-01      |
|              | (master)        | (master)        |
+--------------+-----------------+-----------------+
| crypto_pyaes | 226 ms          | 205 ms: 1.10x   |
|              |                 | faster (-9%)    |
+--------------+-----------------+-----------------+

2016-03-01 .. 2016-06-01:

+--------------+-----------------+-----------------+
| Benchmark    | 2016-03-01      | 2016-06-01      |
|              | (13d09afff127)  | (d80ab7d94578)  |
+--------------+-----------------+-----------------+
| crypto_pyaes | 231 ms          | 199 ms: 1.16x   |
|              |                 | faster (-14%)   |
+--------------+-----------------+-----------------+

json_loads

Progress on 21 months, 2015-01-01 .. 2016-10-01:

+------------+-----------------+-----------------+
| Benchmark  | 2015-01-01      | 2016-10-01      |
|            | (52074ac866eb)  | (78a111c7d867)  |
+------------+-----------------+-----------------+
| json_loads | 64.0 us         | 56.6 us: 1.13x  |
|            |                 | faster (-11%)   |
+------------+-----------------+-----------------+

logging_silent

+----------------+-----------------+-----------------+
| Benchmark      | 2016-01-01      | 2016-07-01      |
|                | (899b72cee21c)  | (355048970b2a)  |
+----------------+-----------------+-----------------+
| logging_silent | 718 ns          | 606 ns: 1.18x   |
|                |                 | faster (-16%)   |
+----------------+-----------------+-----------------+

pickle

pickle, 2016-08-02 .. 2016-09-08:

+-----------+-----------------+-----------------+
| Benchmark | 2016-08-02      | 2016-09-08      |
|           | (133138a284be)  | (10427f44852b)  |
+-----------+-----------------+-----------------+
| pickle    | 25.5 us         | 21.4 us: 1.19x  |
|           |                 | faster (-16%)   |
+-----------+-----------------+-----------------+

pickle dict/list:

+-------------+-----------------+-----------------+
| Benchmark   | 2016-04-01      | 2016-10-01      |
|             | (6b6abd4cf10e)  | (78a111c7d867)  |
+-------------+-----------------+-----------------+
| pickle_dict | 64.5 us         | 57.7 us: 1.12x  |
|             |                 | faster (-11%)   |
+-------------+-----------------+-----------------+
| pickle_list | 9.06 us         | 7.79 us: 1.16x  |
|             |                 | faster (-14%)   |
+-------------+-----------------+-----------------+

unpickle:

+-----------+-----------------+-----------------+
| Benchmark | 2015-07-01      | 2015-10-01      |
|           | (d7982beca93c)  | (30b7138fe12b)  |
+-----------+-----------------+-----------------+
| unpickle  | 36.9 us         | 32.8 us: 1.13x  |
|           |                 | faster (-11%)   |
+-----------+-----------------+-----------------+

python_startup

2015-04-01 .. 2015-10-01:

+------------------------+-----------------+-----------------+
| Benchmark              | 2015-04-01      | 2015-10-01      |
|                        | (4fd929b43121)  | (30b7138fe12b)  |
+------------------------+-----------------+-----------------+
| python_startup         | 16.4 ms         | 17.2 ms: 1.05x  |
|                        |                 | slower (+5%)    |
+------------------------+-----------------+-----------------+
| python_startup_no_site | 8.65 ms         | 8.90 ms: 1.03x  |
|                        |                 | slower (+3%)    |
+------------------------+-----------------+-----------------+

2016-04-01 .. 2017-01-01:

+------------------------+-----------------+-----------------+
| Benchmark              | 2016-04-01      | 2017-01-01      |
|                        | (6b6abd4cf10e)  | (67e1aa0b58be)  |
+------------------------+-----------------+-----------------+
| python_startup         | 17.3 ms         | 14.5 ms: 1.20x  |
|                        |                 | faster (-16%)   |
+------------------------+-----------------+-----------------+
| python_startup_no_site | 8.89 ms         | 8.39 ms: 1.06x  |
|                        |                 | faster (-6%)    |
+------------------------+-----------------+-----------------+

regex_compile

+---------------+-----------------+-----------------+-----------------+
| Benchmark     | 2016-04-01      | 2016-07-01      | 2016-10-01      |
|               | (6b6abd4cf10e)  | (355048970b2a)  | (78a111c7d867)  |
+---------------+-----------------+-----------------+-----------------+
| regex_compile | 339 ms          | 309 ms: 1.10x   | 383 ms: 1.13x   |
|               |                 | faster (-9%)    | slower (+13%)   |
+---------------+-----------------+-----------------+-----------------+

telco

+-----------+-----------------+-----------------+-----------------+-----------------+-----------------+
| Benchmark | 2016-01-01      | 2016-04-01      | 2016-07-01      | 2016-10-01      | 2017-03-31      |
|           | (899b72cee21c)  | (6b6abd4cf10e)  | (355048970b2a)  | (78a111c7d867)  | (cdcac039fb44)  |
+-----------+-----------------+-----------------+-----------------+-----------------+-----------------+
| telco     | 19.6 ms         | 19.2 ms: 1.02x  | 18.3 ms: 1.08x  | 15.1 ms: 1.30x  | 13.9 ms: 1.41x  |
|           |                 | faster (-2%)    | faster (-7%)    | faster (-23%)   | faster (-29%)   |
+-----------+-----------------+-----------------+-----------------+-----------------+-----------------+

scimark

2016-10-01 .. 2017-03-31:

+-------------+-----------------+-----------------+-----------------+
| Benchmark   | 2016-10-01      | 2017-01-01      | 2017-03-31      |
|             | (78a111c7d867)  | (67e1aa0b58be)  | (cdcac039fb44)  |
+-------------+-----------------+-----------------+-----------------+
| scimark_lu  | 423 ms          | 378 ms: 1.12x   | 318 ms: 1.33x   |
|             |                 | faster (-11%)   | faster (-25%)   |
+-------------+-----------------+-----------------+-----------------+
| scimark_sor | 426 ms          | 403 ms: 1.06x   | 375 ms: 1.14x   |
|             |                 | faster (-5%)    | faster (-12%)   |
+-------------+-----------------+-----------------+-----------------+

sqlalchemy_declarative

+------------------------+-----------------+-----------------+
| Benchmark              | 2014-10-01      | 2015-10-01      |
|                        | (5a789f7eaf81)  | (30b7138fe12b)  |
+------------------------+-----------------+-----------------+
| sqlalchemy_declarative | 345 ms          | 301 ms: 1.15x   |
|                        |                 | faster (-13%)   |
+------------------------+-----------------+-----------------+
sympy

2016-04-01 .. 2016-10-01:

+-----------------+-----------------+-----------------+-----------------+
| Benchmark       | 2016-04-01      | 2016-07-01      | 2016-10-01      |
|                 | (6b6abd4cf10e)  | (355048970b2a)  | (78a111c7d867)  |
+-----------------+-----------------+-----------------+-----------------+
| sympy_expand    | 1.10 sec        | 1.01 sec: 1.09x | 942 ms: 1.17x   |
|                 |                 | faster (-8%)    | faster (-14%)   |
+-----------------+-----------------+-----------------+-----------------+
| sympy_integrate | 46.6 ms         | 42.9 ms: 1.09x  | 41.2 ms: 1.13x  |
|                 |                 | faster (-8%)    | faster (-11%)   |
+-----------------+-----------------+-----------------+-----------------+
| sympy_sum       | 247 ms          | 233 ms: 1.06x   | 199 ms: 1.24x   |
|                 |                 | faster (-6%)    | faster (-19%)   |
+-----------------+-----------------+-----------------+-----------------+
| sympy_str       | 483 ms          | 454 ms: 1.07x   | 427 ms: 1.13x   |
|                 |                 | faster (-6%)    | faster (-12%)   |
+-----------------+-----------------+-----------------+-----------------+

xml_etree_generate

+--------------------+-----------------+-----------------+-----------------+-----------------+-----------------+
| Benchmark          | 2015-04-01      | 2015-07-01      | 2015-10-01      | 2016-01-01      | 2016-07-01      |
|                    | (4fd929b43121)  | (d7982beca93c)  | (30b7138fe12b)  | (899b72cee21c)  | (355048970b2a)  |
+--------------------+-----------------+-----------------+-----------------+-----------------+-----------------+
| xml_etree_generate | 282 ms          | 267 ms: 1.06x   | 256 ms: 1.10x   | 237 ms: 1.19x   | 212 ms: 1.33x   |
|                    |                 | faster (-5%)    | faster (-9%)    | faster (-16%)   | faster (-25%)   |
+--------------------+-----------------+-----------------+-----------------+-----------------+-----------------+
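Timelines like the ones above are built by comparing pyperf JSON files recorded at different revisions. A minimal sketch of how the underlying per-benchmark values could be pulled out of two such files (the file names are placeholders, and the sketch assumes pyperf's BenchmarkSuite.load(), get_benchmarks(), get_benchmark() and Benchmark.mean() API):

   import pyperf

   # Hypothetical result files from two revisions.
   old_suite = pyperf.BenchmarkSuite.load("2016-04-01.json")
   new_suite = pyperf.BenchmarkSuite.load("2016-10-01.json")

   for old_bench in old_suite.get_benchmarks():
       name = old_bench.get_name()
       try:
           new_bench = new_suite.get_benchmark(name)
       except KeyError:
           continue  # benchmark missing from the newer file
       ratio = old_bench.mean() / new_bench.mean()
       if ratio >= 1.0:
           print("%s: %.2fx faster" % (name, ratio))
       else:
           print("%s: %.2fx slower" % (name, 1.0 / ratio))

This only looks at the mean of each benchmark; the comparison commands do more (significance checks, units, formatting), so treat it as a way to inspect the raw data rather than a replacement for them.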
CHANGELOG

Version 1.11.0 (2024-03-09)
  o Add a --same-loops option to the run command to use the exact same number of loops as a previous run (without recalibrating).
  o Bump pyperf to 2.6.3
  o Fix the django_template benchmark for compatibility with 3.13
  o Fix benchmark.conf.sample

Version 1.10.0 (2023-10-22)
  o Add benchmark for asyncio_websockets
  o Expose --min-time from pyperf to the pyperformance CLI
  o Bump coverage to 7.3.2 for compatibility with 3.13
  o Bump greenlet to 3.0.0rc3 for compatibility with 3.13

Version 1.0.9 (2023-06-14)
  o Vendor lib2to3 for Python 3.13+
  o Add TaskGroups variants to async_tree benchmarks

Version 1.0.8 (2023-06-02)
  o Move the main requirements.txt file to pyperformance/requirements so that dependabot can only run on that one file
  o Update dependencies of benchmarks not to specify setuptools
  o On older versions of Python, skip benchmarks that use features introduced in newer Python versions
  o Support --inherit-environ when reusing a venv
  o Use tomllib/tomli over toml
  o Update MANIFEST.in to include cert files for the asyncio_tcp_ssl benchmark
  o Fix undefined variable issue when raising VenvPipInstallFailedError
  o Add mypy config; run mypy in CI
  o Fix a str.partition typo in _pyproject_toml.py
  o Add a version of the Richards benchmark that uses super()
  o Add a benchmark for runtime-checkable protocols
  o Extend async tree benchmarks to cover eager task execution

Version 1.0.7 (2023-04-22)
  o Upgrade pyperf from 2.5.0 to 2.6.0
  o Clean unused imports and other small code details
  o Migrate to a pyproject.toml-based project
  o Fix the django_template benchmark due to lack of distutils
  o Add benchmark for toml
  o Add benchmark for comprehensions
  o Add benchmark for asyncio_tcp_ssl
  o Add benchmark for asyncio_tcp
  o Add benchmark for the Dask scheduler
  o Add the gc benchmarks to the MANIFEST file

Version 1.0.6 (2022-11-20)
  o Upgrade pyperf from 2.4.1 to 2.5.0
  o Add a benchmark to measure gc traversal
  o Add a jobs field in the compile section to specify the make -j parameter
  o Add benchmark for Docutils
  o Add async_generators benchmark
  o Add benchmark for IPC
  o Fix Manifest Group
  o Fix installing dev build of pyperformance inside compile/compile_all
  o Always upload, even when some benchmarks fail
  o Add sqlglot benchmarks
  o Support reporting geometric mean by tags
  o Allow for specifying local wheels and sdists as dependencies
  o Add a benchmark based on python -m pprint
  o Add mdp back into the default group
  o Add coroutines benchmark
  o Reduce noise in generators benchmark
  o Add benchmark for deepcopy
  o Add coverage benchmark
  o Add generators benchmark
  o Add benchmark for async tree workloads
  o Support relative paths to manifest files
  o Add support for multiple benchmark groups in a manifest
  o Fix --inherit-environ issue
  o Use working Genshi 0.7.7

Version 1.0.4 (2022-01-25)
  o Re-release support for user-defined benchmarks after fixing a problem with virtual environments.

Version 1.0.3 (2021-12-20)
  o Support user-defined benchmark suites.

Version 1.0.2 (2021-05-11)
  o Disable the genshi benchmark temporarily since it is no longer compatible with Python 3.11.
  o Reenable the html5lib benchmark: html5lib 1.1 has been released.
  o Update requirements.
  o Replace Travis CI with GitHub Actions.
  o The development branch master was renamed to main. See https://sfconservancy.org/news/2020/jun/23/gitbranchname/ for the rationale.

Version 1.0.1 (2020-03-26)
  o Drop usage of the six module since Python 2 is no longer supported. Remove Python 2 specific code.
  o Update dependencies:
    o django: 3.0 => 3.0.4
    o dulwich: 0.19.14 => 0.19.15
    o mako: 1.1.0 => 1.1.2
    o mercurial: 5.1.1 => 5.3.1
    o psutil: 5.6.7 => 5.7.0
    o pyperf: 1.7.0 => 2.0.0
    o sqlalchemy: 1.3.12 => 1.3.15
    o sympy: 1.5 => 1.5.1
    o tornado: 6.0.3 => 6.0.4
  o Remove six, html5lib and mercurial requirements.
  o pip-tools (pip-compile) is now used to update dependencies

Version 1.0.0 (2019-12-17)
  o Enable pyflate benchmarks on Python 3.
  o Remove the spambayes benchmark: it is not compatible with Python 3.
  o Remove the 2n3 benchmark group.
  o Drop Python 2.7 support: old Django and Tornado versions are not compatible with the incoming Python 3.9.
  o Disable the html5lib benchmark temporarily, since it's no longer compatible with Python 3.9.
  o Update requirements:
    o Django: 1.11.22 => 3.0
    o Mako: 1.0.14 => 1.1.0
    o SQLAlchemy: 1.3.6 => 1.3.12
    o certifi: 2019.6.16 => 2019.11.28
    o docutils: 0.15.1 => 0.15.2
    o dulwich: 0.19.11 => 0.19.14
    o mercurial: 5.0.2 => 5.1.1
    o psutil: 5.6. => 5.6.7
    o pyperf: 1.6.1 => 1.7.0
    o six: 1.12. => 1.13.0
    o sympy: 1.4 => 1.5

Version 0.9.1 (2019-07-29)
  o Enable hg_startup on Python 3
  o Fix compatibility with Python 3.8 beta 2
  o Update requirements:
    o certifi: 2019.3.9 => 2019.6.16
    o Chameleon: 3.6.1 => 3.6.2
    o Django: 1.11.20 => 1.11.22
    o docutils: 0.14 => 0.15.1.post1
    o Mako: 1.0.10 => 1.0.14
    o mercurial: 5.0 => 5.0.2
    o pathlib2: 2.3.3 => 2.3.4
    o psutil: 5.6.2 => 5.6.3
    o SQLAlchemy: 1.3.4 => 1.3.6

Version 0.9.0 (2019-05-29)
  o Project renamed from "performance" to "pyperformance"
  o Upgrade pyperf from version 1.6.0 to 1.6.1. The project has been renamed from "perf" to "pyperf". Update imports.
  o Issue #54: Update Genshi to 0.7.3. It is now compatible with Python 3.8.
  o Update requirements:
    o Mako: 1.0.9 => 1.0.10
    o SQLAlchemy: 1.3.3 => 1.3.4

Version 0.8.0 (2019-05-10)
  o compile command: Add "pkg_only" option to benchmark.conf. Add support for native libraries that are installed but not on path. Patch by Robert Grimm.
  o Update Travis configuration: use trusty image, use pip cache. Patch by Inada Naoki.
  o Upgrade tornado to 5.1.1. Patch by Inada Naoki.
  o Fix compile command on Mac OS: no program extension. Patch by Anthony Shaw.
  o Update requirements:
    o Chameleon: 3.4 => 3.6.1
    o Django: 1.11.16 => 1.11.20
    o Genshi: 0.7.1 => 0.7.2
    o Mako: 1.0.7 => 1.0.9
    o MarkupSafe: 1.0 => 1.1.1
    o SQLAlchemy: 1.2.12 => 1.3.3
    o certifi: 2018.10.15 => 2019.3.9
    o dulwich: 0.19.6 => 0.19.11
    o mercurial: 4.7.2 => 5.0
    o mpmath: 1.0.0 => 1.1.0
    o pathlib2: 2.3.2 => 2.3.3
    o perf: 1.5.1 => 1.6.0
    o psutil: 5.4.7 => 5.6.2
    o six: 1.11.0 => 1.12.0
    o sympy: 1.3 => 1.4
    o tornado: 4.5.3 => 5.1.1

Version 0.7.0 (2018-10-16)
  o python_startup: Add --exit option.
  o Update requirements:
    o certifi: 2017.11.5 => 2018.10.15
    o Chameleon: 3.2 => 3.4
    o Django: 1.11.9 => 1.11.16
    o dulwich: 0.18.6 => 0.19.6
    o Genshi: 0.7 => 0.7.1
    o mercurial: 4.4.2 => 4.7.2
    o pathlib2: 2.3.0 => 2.3.2
    o psutil: 5.4.3 => 5.4.7
    o SQLAlchemy: 1.2.0 => 1.2.12
    o sympy: 1.1.1 => 1.3
  o Fix issue #40 for pip 10 and newer: Remove indirect dependencies. Indirect dependencies were used to install cffi, but Mercurial 4.0 doesn't depend on cffi anymore.

Version 0.6.1 (2018-01-11)
  o Fix inherit-environ: propagate to recursive invocations of performance in the compile and compile_all commands.
  o Fix the --track-memory option thanks to the update to perf 1.5.
  o Update requirements:
    o certifi: 2017.4.17 => 2017.11.5
    o Chameleon: 3.1 => 3.2
    o Django: 1.11.3 => 1.11.9
    o docutils: 0.13.1 => 0.14
    o dulwich: 0.17.3 => 0.18.6
    o html5lib: 0.999999999 => 1.0.1
    o Mako: 1.0.6 => 1.0.7
    o mercurial: 4.2.2 => 4.4.2
    o mpmath: 0.19 => 1.0.0
    o perf: 1.4 => 1.5.1 (fix --track-memory option)
    o psutil: 5.2.2 => 5.4.3
    o pyaes: 1.6.0 => 1.6.1
    o six: 1.10.0 => 1.11.0
    o SQLAlchemy: 1.1.11 => 1.2.0
    o sympy: 1.0 => 1.1.1
    o tornado: 4.5.1 => 4.5.3

Version 0.6.0 (2017-07-06)
  o Change warn to warning in bm_logging.py. In Python 3, Logger.warn() calls warnings.warn() to log a deprecation warning, so it is slower than Logger.warning().
  o Add again the logging_silent microbenchmark suite.
  o compile command: update the Git repository before getting the revision
  o Update requirements:
    o perf: 1.3 => 1.4 (fix parse_cpu_list(): strip also NUL characters)
    o Django: 1.11.1 => 1.11.3
    o mercurial: 4.2 => 4.2.2
    o pathlib2: 2.2.1 => 2.3.0
    o SQLAlchemy: 1.1.10 => 1.1.11

Version 0.5.5 (2017-05-29)
  o On the 2.x branch of CPython, compile now passes --enable-unicode=ucs4 to the configure script on all platforms, except on Windows which uses UTF-16 because of its 16-bit wchar_t.
  o The float benchmark now uses __slots__ on the Point class.
  o Remove the following microbenchmarks. They have been moved to the pymicrobench project because they are too short, not representative of real applications and are too unstable.
    o pybench microbenchmark suite
    o call_simple
    o call_method
    o call_method_unknown
    o call_method_slots
    o logging_silent: values are faster than 1 ns on PyPy with 2^27 loops! (and around 0.7 us on CPython)
  o Update requirements:
    o Django: 1.11 => 1.11.1
    o SQLAlchemy: 1.1.9 => 1.1.10
    o certifi: 2017.1.23 => 2017.4.17
    o perf: 1.2 => 1.3
    o mercurial: 4.1.2 => 4.2
    o tornado: 4.4.3 => 4.5.1

Version 0.5.4 (2017-04-10)
  o Create new documentation at: http://pyperformance.readthedocs.io/
  o Add "CPython results, 2017" to the doc: significant performance changes, significant optimizations, timeline, etc.
  o The show command doesn't need to create a virtual env anymore.
  o Add new commands:
    o pyperformance compile: compile, install and benchmark
    o pyperformance compile_all: benchmark multiple branches and revisions of Python
    o pyperformance upload: upload a JSON file to a Codespeed
  o setup.py: add dependencies to the perf and six modules.
  o bm_xml_etree now uses "_pure_python" in benchmark names if the accelerator is explicitly disabled.
  o Upgrade requirements:
    o Django: 1.10.6 => 1.11
    o SQLAlchemy: 1.1.6 => 1.1.9
    o mercurial: 4.1.1 => 4.1.2
    o perf: 1.1 => 1.2
    o psutil: 5.2.1 => 5.2.2
    o tornado: 4.4.2 => 4.4.3
    o webencodings: 0.5 => 0.5.1
  o perf 1.2 now calibrates the number of warmups on PyPy.
  o On Python 3.5a0: force pip 7.1.2 and setuptools 18.5: https://sourceforge.net/p/pyparsing/bugs/100/

Version 0.5.3 (2017-03-27)
  o Upgrade Dulwich to 0.17.3 to support PyPy older than 5.6: see https://github.com/jelmer/dulwich/issues/509
  o Fix ResourceWarning warnings: explicitly close files and sockets.
  o scripts: replace Mercurial commands with Git commands.
  o Upgrade requirements:
    o dulwich: 0.17.1 => 0.17.3
    o perf: 1.0 => 1.1
    o psutil: 5.2.0 => 5.2.1

Version 0.5.2 (2017-03-17)
  o Upgrade requirements:
    o certifi: 2016.9.26 => 2017.1.23
    o Chameleon: 3.0 => 3.1
    o Django: 1.10.5 => 1.10.6
    o MarkupSafe: 0.23 => 1.0
    o dulwich: 0.16.3 => 0.17.1
    o mercurial: 4.0.2 => 4.1.1
    o pathlib2: 2.2.0 => 2.2.1
    o perf: 0.9.3 => 1.0
    o psutil: 5.0.1 => 5.2.0
    o SQLAlchemy: 1.1.4 => 1.1.6

Version 0.5.1 (2017-01-16)
  o Fix Windows support (upgrade perf from 0.9.0 to 0.9.3)
  o Upgrade requirements:
    o Chameleon: 2.25 => 3.0
    o Django: 1.10.3 => 1.10.5
    o docutils: 0.12 => 0.13.1
    o dulwich: 0.15.0 => 0.16.3
    o mercurial: 4.0.0 => 4.0.2
    o perf: 0.9.0 => 0.9.3
    o psutil: 5.0.0 => 5.0.1

Version 0.5.0 (2016-11-16)
  o Add the mdp benchmark: battle with damages and topological sorting of nodes in a graph
  o The default benchmark group now includes all benchmarks but pybench
  o If a benchmark fails, log an error, continue to execute the following benchmarks, but exit with error code 1.
  o Remove deprecated benchmarks: threading_threaded_count and threading_iterative_count. It wasn't possible to run them anyway.
  o The dulwich requirement is now optional since its installation fails on Windows.
  o Upgrade requirements:
    o Mako: 1.0.5 => 1.0.6
    o Mercurial: 3.9.2 => 4.0.0
    o SQLAlchemy: 1.1.3 => 1.1.4
    o backports-abc: 0.4 => 0.5

Version 0.4.0 (2016-11-07)
  o Add the sqlalchemy_imperative benchmark: it wasn't registered properly
  o The list command now only lists the benchmarks that the run command will run. The list command gets a new -b/--benchmarks option.
  o Rewrite the code creating the virtual environment to test pip correctly. Download and run get-pip.py if the pip installation failed.
  o Upgrade requirements:
    o perf: 0.8.2 => 0.9.0
    o Django: 1.10.2 => 1.10.3
    o Mako: 1.0.4 => 1.0.5
    o psutil: 4.3.1 => 5.0.0
    o SQLAlchemy: 1.1.2 => 1.1.3
  o Remove the virtualenv dependency

Version 0.3.2 (2016-10-19)
  o Fix setup.py: include also performance/benchmarks/data/asyncio.git/

Version 0.3.1 (2016-10-19)
  o Add regex_dna benchmark
  o The run command now fails with an error if no benchmark was run.
  o genshi, logging, scimark, sympy and xml_etree scripts now run all sub-benchmarks by default
  o Rewrite pybench using perf: remove the old legacy code to calibrate and run benchmarks, reuse the perf.Runner API.
  o Change the heuristic to create the virtual environment; tried commands:
    o python -m venv
    o python -m virtualenv
    o virtualenv -p python
  o The creation of the virtual environment now ensures that pip works, to detect "python3 -m venv" environments which don't install pip.
  o Upgrade the perf dependency from 0.7.12 to 0.8.2: update all benchmarks to the new perf 0.8 API (which introduces incompatible changes)
  o Update SQLAlchemy from 1.1.1 to 1.1.2

Version 0.3.0 (2016-10-11)

New benchmarks:
  o Add crypto_pyaes: Benchmark a pure-Python implementation of the AES block-cipher in CTR mode using the pyaes module (version 1.6.0). Add pyaes dependency.
  o Add sympy: Benchmark on SymPy. Add scipy dependency.
  o Add scimark benchmark
  o Add deltablue: DeltaBlue benchmark
  o Add dulwich_log: Iterate on commits of the asyncio Git repository using the Dulwich module. Add dulwich (and mpmath) dependencies.
  o Add pyflate: Pyflate benchmark, tar/bzip2 decompressor in pure Python
  o Add sqlite_synth benchmark: Benchmark Python aggregate for SQLite
  o Add genshi benchmark: Render template to XML or plain text using the Genshi module. Add Genshi dependency.
  o Add sqlalchemy_declarative and sqlalchemy_imperative benchmarks: SQLAlchemy Declarative and Imperative benchmarks using SQLite. Add SQLAlchemy dependency.

Enhancements:
  o The compare command now fails if the performance versions are different
  o nbody: add --reference and --iterations command line options.
  o chaos: add --width, --height, --thickness, --filename and --rng-seed command line options
  o django_template: add --table-size command line option
  o json_dumps: add --cases command line option
  o pidigits: add --digits command line option
  o raytrace: add --width, --height and --filename command line options
  o Port html5lib benchmark to Python 3
  o Enable pickle_pure_python and unpickle_pure_python on Python 3 (code was already compatible with Python 3)
  o Creating the virtual environment doesn't inherit environment variables (especially PYTHONPATH) by default anymore: the --inherit-environ command line option must now be used explicitly.

Bugfixes:
  o The chaos benchmark now also resets the random module at each sample to get more reproducible benchmark results
  o Logging benchmarks now truncate the in-memory stream before each benchmark run

Rename benchmarks:
  o Rename benchmarks to get a consistent name between the command line and the benchmark name in the JSON file.
  o Rename pickle benchmarks:
    o slowpickle becomes pickle_pure_python
    o slowunpickle becomes unpickle_pure_python
    o fastpickle becomes pickle
    o fastunpickle becomes unpickle
  o Rename ElementTree benchmarks: replace the etree_ prefix with xml_etree_.
  o Rename hexiom2 to hexiom_level25 and explicitly pass the --level=25 parameter
  o Rename json_load to json_loads
  o Rename json_dump_v2 to json_dumps (and remove the deprecated json_dump benchmark)
  o Rename normal_startup to python_startup, and startup_nosite to python_startup_no_site
  o Rename threaded_count to threading_threaded_count, rename iterative_count to threading_iterative_count
  o Rename logging benchmarks:
    o silent_logging to logging_silent
    o simple_logging to logging_simple
    o formatted_logging to logging_format

Minor changes:
  o Update dependencies
  o Remove the broken --args command line option.

Version 0.2.2 (2016-09-19)
  o Add a new show command to display a benchmark file
  o Issue #11: Display the Python version in compare. Also display the performance version.
  o CPython issue #26383: csv output: don't truncate digits for timings shorter than 1 us
  o compare: Use the sample unit of benchmarks, format values in the table output using the unit
  o compare: Fix the table output if benchmarks only contain a single sample
  o Remove unused -C/--control_label and -E/--experiment_label options
  o Update perf dependency to 0.7.11 to get Benchmark.get_unit() and BenchmarkSuite.get_metadata()

Version 0.2.1 (2016-09-10)
  o Add --csv option to the compare command
  o Fix compare -O table output format
  o Freeze indirect dependencies in requirements.txt
  o run: add --track-memory option to track the memory peak usage
  o Update perf dependency to 0.7.8 to support memory tracking and the new --inherit-environ command line option
  o If the virtualenv command fails, try another command to create the virtual environment: catch the virtualenv error
  o The first command to upgrade pip to version >= 6.0 now uses the pip binary rather than python -m pip to support pip 1.0 which doesn't support the python -m pip CLI.
  o Update Django (1.10.1), Mercurial (3.9.1) and psutil (4.3.1)
  o Rename the --inherit_env command line option to --inherit-environ and fix it

Version 0.2 (2016-09-01)
  o Update Django dependency to 1.10
  o Update Chameleon dependency to 2.24
  o Add the --venv command line option
  o Convert Python startup, Mercurial startup and 2to3 benchmarks to perf scripts (bm_startup.py, bm_hg_startup.py and bm_2to3.py)
  o Pass the --affinity option to perf scripts rather than using the taskset command
  o Put more installer and optional requirements into performance/requirements.txt
  o Cached .pyc files are no longer removed before running a benchmark. Use the venv recreate command to update a virtual environment if required.
  o The broken --track_memory option has been removed. It will be added back once it is fixed.
  o Add the performance version to metadata
  o Upgrade perf dependency to 0.7.5 to get Benchmark.update_metadata()

Version 0.1.2 (2016-08-27)
  o Windows is now supported
  o Add a new venv command to show, create, recreate or remove the virtual environment.
  o Fix the pybench benchmark (update to the perf 0.7.4 API)
  o performance now tries to install the psutil module on CPython for better system metrics in metadata and CPU pinning on Python 2.
  o The creation of the virtual environment now also tries the virtualenv and venv Python modules, not only the virtualenv command.
  o The development version of performance now installs performance with "pip install -e "
  o The GitHub project was renamed from python/benchmarks to python/performance.

Version 0.1.1 (2016-08-24)
  o Fix the creation of the virtual environment
  o Rename the pybenchmarks script to pyperformance
  o Add -p/--python command line option
  o Add a __main__ module to be able to run: python3 -m performance

Version 0.1 (2016-08-24)
  o First release after the conversion to the perf module and the move to GitHub
  o Removed benchmarks:
    o django_v2, django_v3
    o rietveld
    o spitfire (and psyco): Spitfire is not available on PyPI
    o pystone
    o gcbench
    o tuple_gc_hell

History

Project moved to https://github.com/python/performance in August 2016. Files reorganized, benchmarks patched to use the perf module to run benchmarks in multiple processes.

Project started in December 2008 by Collin Winter and Jeffrey Yasskin for the Unladen Swallow project.
The project was hosted at https://hg.python.org/benchmarks until February 2016.

Other Python Benchmarks:
  o CPython: speed.python.org uses pyperf, pyperformance and Codespeed (Django web application)
  o PyPy: speed.pypy.org uses PyPy benchmarks
  o Pyston: pyston-perf and speed.pyston.org
  o Numba benchmarks
  o Cython: Cython Demos/benchmarks
  o pythran: numpy-benchmarks

See also the Python speed mailing list and the Python pyperf module (used by pyperformance).

pyperformance is not tuned for PyPy yet: use the PyPy benchmarks project instead to measure PyPy performance.

Image generated by bm_raytrace (pure Python raytracer):

[image: Pure Python raytracer]

AUTHOR
       Victor Stinner

COPYRIGHT
       2017, Victor Stinner

1.0.6                        November 27, 2024       PYTHONPERFORMANCEBENCHMARKSUITE(1)