git/t/perf
Derrick Stolee 4b707a6e99 p1500: add is-base performance tests
The previous two changes introduced a commit walking heuristic for finding
the most likely base branch for a given source. This algorithm walks
first-parent histories until reaching a collision.

This walk _should_ be very fast. Exceptions include cases where a
commit-graph file does not exist, leading to a full walk of all reachable
commits to compute generation numbers, or a case where no collision in the
first-parent history exists, leading to a walk of all first-parent history
to the root commits.

The p1500 test script guarantees a complete commit-graph file during its
setup, so we will not test that scenario. We do, however, create a new
root commit to test the scenario of parallel first-parent histories.

Even with the extra root commit, these tests take no longer than 0.02
seconds on my machine for the Git repository. However, the results are
slightly more interesting in a copy of the Linux kernel repository:

Test
---------------------------------------------------------------
1500.2: ahead-behind counts: git for-each-ref              0.12
1500.3: ahead-behind counts: git branch                    0.12
1500.4: ahead-behind counts: git tag                       0.12
1500.5: contains: git for-each-ref --merged                0.04
1500.6: contains: git branch --merged                      0.04
1500.7: contains: git tag --merged                         0.04
1500.8: is-base check: test-tool reach (refs)              0.03
1500.9: is-base check: test-tool reach (tags)              0.03
1500.10: is-base check: git for-each-ref                   0.03
1500.11: is-base check: git for-each-ref (disjoint-base)   0.07
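
The for-each-ref timings above go through the ref-format code path; a
representative invocation (assuming the %(is-base:<committish>) format
atom that these perf tests time; shown here only as an illustrative
sketch) would be:

    git for-each-ref --format='%(refname)%(is-base:HEAD)' refs/heads/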

Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2024-08-14 10:10:06 -07:00
repos t/perf: avoid redundant use of cat 2024-03-16 11:08:56 -07:00
.gitignore p7519: add trace logging during perf test 2021-02-16 17:14:34 -08:00
Makefile Makefiles: add "shared.mak", move ".DELETE_ON_ERROR" to it 2022-03-03 14:14:55 -08:00
README t/perf: add 'GIT_PERF_USE_SCALAR' run option 2022-09-02 10:02:56 -07:00
aggregate.perl t/perf/aggregate.perl: tolerate leading spaces 2021-10-04 14:12:28 -07:00
bisect_regression Fix spelling errors in messages shown to users 2019-11-10 16:00:54 +09:00
bisect_run_script perf/bisect_run_script: disable codespeed 2018-05-06 13:04:54 +09:00
config perf: disable automatic housekeeping 2021-10-11 13:17:58 -07:00
lib-bitmap.sh bitmap-lookup-table: add performance tests for lookup table 2022-08-26 10:14:02 -07:00
lib-pack.sh t/perf/lib-pack: use fast-import checkpoint to create packs 2017-11-21 11:07:28 +09:00
min_time.perl
p0000-perf-lib-sanity.sh t/perf: correctly align non-ASCII descriptions in output 2017-04-23 21:33:15 -07:00
p0001-rev-list.sh revision: use a prio_queue to hold rewritten parents 2019-04-04 18:21:54 +09:00
p0002-read-cache.sh t/helper: merge test-read-cache into test-tool 2018-03-27 08:45:47 -07:00
p0003-delta-base-cache.sh t/perf: add basic perf tests for delta base cache 2016-08-23 15:26:16 -07:00
p0004-lazy-init-name-hash.sh p0004: fix prereq declaration 2022-08-19 14:35:28 -07:00
p0005-status.sh tests: fix broken &&-chains in compound statements 2021-12-13 10:29:48 -08:00
p0006-read-tree-checkout.sh read-tree: use 'skip_cache_tree_update' option 2022-11-10 21:49:34 -05:00
p0007-write-cache.sh tests: fix broken &&-chains in compound statements 2021-12-13 10:29:48 -08:00
p0008-odb-fsync.sh core.fsyncmethod: performance tests for batch mode 2022-04-06 13:13:26 -07:00
p0071-sort.sh test-mergesort: use DEFINE_LIST_SORT 2022-07-17 15:20:38 -07:00
p0090-cache-tree.sh cache-tree: add perf test comparing update and prime 2022-11-10 21:49:33 -05:00
p0100-globbing.sh t0000-t3999: detect and signal failure within loop 2021-12-13 10:29:48 -08:00
p1006-cat-file.sh cat-file: skip expanding default format 2022-03-15 10:15:32 -07:00
p1400-update-ref.sh t0000-t3999: detect and signal failure within loop 2021-12-13 10:29:48 -08:00
p1450-fsck.sh fsck: add a performance test 2018-09-12 15:17:46 -07:00
p1451-fsck-skip-list.sh t0000-t3999: detect and signal failure within loop 2021-12-13 10:29:48 -08:00
p1500-graph-walks.sh p1500: add is-base performance tests 2024-08-14 10:10:06 -07:00
p2000-sparse-operations.sh check-attr: integrate with sparse-index 2023-08-11 09:44:52 -07:00
p3400-rebase.sh t0000-t3999: detect and signal failure within loop 2021-12-13 10:29:48 -08:00
p3404-rebase-interactive.sh perf: run "rebase -i" under perf 2016-05-13 11:07:12 -07:00
p4000-diff-algorithms.sh
p4001-diff-no-index.sh diff: don't read index when --no-index is given 2013-12-12 12:23:02 -08:00
p4002-diff-color-moved.sh diff --color-moved: add perf tests 2021-12-09 13:24:05 -08:00
p4205-log-pretty-formats.sh pretty: lazy-load commit data when expanding user-format 2021-01-28 14:07:35 -08:00
p4209-pickaxe.sh perf: add performance test for pickaxe 2021-05-11 12:47:31 +09:00
p4211-line-log.sh sha1_file: use strbuf_add() instead of strbuf_addf() 2017-12-04 10:38:55 -08:00
p4220-log-grep-engines.sh t/perf: add iteration setup mechanism to perf-lib 2022-04-06 13:13:26 -07:00
p4221-log-grep-engines-fixed.sh t/perf: add iteration setup mechanism to perf-lib 2022-04-06 13:13:26 -07:00
p5302-pack-index.sh t/perf: add iteration setup mechanism to perf-lib 2022-04-06 13:13:26 -07:00
p5303-many-packs.sh t0000-t3999: detect and signal failure within loop 2021-12-13 10:29:48 -08:00
p5304-prune.sh prune: use bitmaps for reachability traversal 2019-02-14 15:25:33 -08:00
p5310-pack-bitmaps.sh bitmap-lookup-table: add performance tests for lookup table 2022-08-26 10:14:02 -07:00
p5311-pack-bitmaps-fetch.sh bitmap-lookup-table: add performance tests for lookup table 2022-08-26 10:14:02 -07:00
p5312-pack-bitmaps-revs.sh config: enable `pack.writeReverseIndex` by default 2023-04-13 07:55:46 -07:00
p5326-multi-pack-bitmaps.sh bitmap-lookup-table: add performance tests for lookup table 2022-08-26 10:14:02 -07:00
p5332-multi-pack-reuse.sh t/perf: add performance tests for multi-pack reuse 2023-12-14 14:38:09 -08:00
p5333-pseudo-merge-bitmaps.sh t/perf: implement performance tests for pseudo-merge bitmaps 2024-05-24 11:40:44 -07:00
p5550-fetch-tags.sh p5550: factor out nonsense-pack creation 2017-11-21 11:07:12 +09:00
p5551-fetch-rescan.sh p5551: add a script to test fetch pack-dir rescans 2017-11-21 11:08:20 +09:00
p5600-partial-clone.sh repack: avoid loosening promisor objects in partial clones 2021-04-28 13:36:13 +09:00
p5601-clone-reference.sh t/perf: rename duplicate-numbered test script 2019-08-12 09:05:13 -07:00
p6300-for-each-ref.sh t/perf: add perf tests for for-each-ref 2023-11-16 14:03:01 +09:00
p7000-filter-branch.sh p7000: add test for filter-branch with --prune-empty 2017-03-03 12:43:37 -08:00
p7102-reset.sh reset: use 'skip_cache_tree_update' option 2022-11-10 21:49:34 -05:00
p7300-clean.sh resolve_gitlink_ref: ignore non-repository paths 2016-01-25 11:42:13 -08:00
p7519-fsmonitor.sh Merge branch 'ns/batch-fsync' 2022-06-03 14:30:34 -07:00
p7527-builtin-fsmonitor.sh t: detect and signal failure within loop 2022-08-22 12:53:02 -07:00
p7810-grep.sh
p7820-grep-engines.sh t/perf: add iteration setup mechanism to perf-lib 2022-04-06 13:13:26 -07:00
p7821-grep-engines-fixed.sh perf: amend the grep tests to test grep.threads 2018-01-04 10:24:48 -08:00
p7822-grep-perl-character.sh grep: correctly identify utf-8 characters with \{b,w} in -P 2023-01-18 15:24:52 -08:00
p9210-scalar.sh t/perf: add Scalar performance tests 2022-09-02 10:02:56 -07:00
p9300-fast-import-export.sh fast-import: replace custom hash with hashmap.c 2020-04-06 13:41:24 -07:00
perf-lib.sh Merge branch 'js/update-urls-in-doc-and-comment' 2023-12-18 14:10:12 -08:00
run global: convert trivial usages of `test <expr> -a/-o <expr>` 2023-11-11 09:21:00 +09:00

README

Git performance tests
=====================

This directory holds performance testing scripts for git tools.  The
first part of this document describes the various ways in which you
can run them.

When fixing the tools or adding enhancements, you are strongly
encouraged to add tests in this directory to cover what you are
trying to fix or enhance.  The later part of this short document
describes how your test scripts should be organized.


Running Tests
-------------

The easiest way to run tests is to say "make".  This runs all
the tests on the current git repository.

    === Running 2 tests in this tree ===
    [...]
    Test                                     this tree
    ---------------------------------------------------------
    0001.1: rev-list --all                   0.54(0.51+0.02)
    0001.2: rev-list --all --objects         6.14(5.99+0.11)
    7810.1: grep worktree, cheap regex       0.16(0.16+0.35)
    7810.2: grep worktree, expensive regex   7.90(29.75+0.37)
    7810.3: grep --cached, cheap regex       3.07(3.02+0.25)
    7810.4: grep --cached, expensive regex   9.39(30.57+0.24)

The output format is in seconds: "Elapsed(User + System)".

You can compare multiple repositories and even git revisions with the
'run' script:

    $ ./run . origin/next /path/to/git-tree p0001-rev-list.sh

where . stands for the current git tree.  The full invocation is

    ./run [<revision|directory>...] [--] [<test-script>...]

A '.' argument is implied if you do not pass any other
revisions/directories.
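
For example, to compare a tagged release against the current HEAD on a
single test script (the revision names here are only placeholders):

    $ ./run v2.44.0 HEAD -- p5302-pack-index.sh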

You can also manually test this or another git build tree, and then
call the aggregation script to summarize the results:

    $ ./p0001-rev-list.sh
    [...]
    $ ./run /path/to/other/git -- ./p0001-rev-list.sh
    [...]
    $ ./aggregate.perl . /path/to/other/git ./p0001-rev-list.sh

aggregate.perl takes the same invocation as 'run'; it just does not run
anything beforehand.

You can set the following variables (also in your config.mak):

    GIT_PERF_REPEAT_COUNT
	Number of times a test should be repeated for best-of-N
	measurements.  Defaults to 3.

    GIT_PERF_MAKE_OPTS
	Options to use when automatically building a git tree for
	performance testing. E.g., -j6 would be useful. Passed
	directly to make as "make $GIT_PERF_MAKE_OPTS".

    GIT_PERF_MAKE_COMMAND
	An arbitrary command that will be run in place of the make
	command. If set, the GIT_PERF_MAKE_OPTS variable is ignored.
	Useful in cases where source tree changes might require
	issuing a different make command to different revisions.

	This can be (ab)used to monkeypatch or otherwise change the
	tree about to be built. Note that the build directory can be
	reused for subsequent runs, so the make command might get
	executed multiple times on the same tree; don't count on any
	of that, as it is an implementation detail that might change
	in the future.

    GIT_PERF_REPO
    GIT_PERF_LARGE_REPO
	Repositories to copy for the performance tests.  The normal
	repo should be at least git.git size.  The large repo should
	probably be about linux.git size for optimal results.
	Both default to the git.git you are running from.

    GIT_PERF_EXTRA
	Boolean to enable additional tests. Most test scripts are
	written to detect regressions between two versions of Git, and
	the output will compare timings for individual tests between
	those versions. Some scripts have additional tests, not run by
	default, that show patterns within a single version of
	Git (e.g., performance of index-pack as the number of threads
	changes). These can be enabled with GIT_PERF_EXTRA.

    GIT_PERF_USE_SCALAR
	Boolean indicating whether to register test repo(s) with Scalar
	before executing tests.
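
For example, a config.mak fragment for perf runs might look like the
following sketch (the repository path is a placeholder for your own
clone):

	GIT_PERF_REPEAT_COUNT = 5
	GIT_PERF_MAKE_OPTS = -j8
	GIT_PERF_LARGE_REPO = /path/to/linux.git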

You can also pass the options taken by ordinary git tests; the most
useful one is:

--root=<directory>::
	Create "trash" directories used to store all temporary data during
	testing under <directory>, instead of the t/ directory.
	Using this option with a RAM-based filesystem (such as tmpfs)
	can massively speed up the test suite.
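
For example, on a Linux system where /dev/shm is a tmpfs mount, a single
perf script could be run as (the path is just a placeholder):

    $ ./p0001-rev-list.sh --root=/dev/shm/git-perf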


Naming Tests
------------

The performance test files are named as:

	pNNNN-commandname-details.sh

where N is a decimal digit.  The same conventions for choosing NNNN as
for normal tests apply.


Writing Tests
-------------

The perf script starts much like a normal test script, except it
sources perf-lib.sh:

	#!/bin/sh
	#
	# Copyright (c) 2005 Junio C Hamano
	#

	test_description='xxx performance test'
	. ./perf-lib.sh

After that you will want to use some of the following:

	test_perf_fresh_repo    # sets up an empty repository
	test_perf_default_repo  # sets up a "normal" repository
	test_perf_large_repo    # sets up a "large" repository

	test_perf_default_repo sub  # ditto, in a subdir "sub"

	test_checkout_worktree  # if you need the worktree too

At least one of the first two is required!

You can use test_expect_success as usual. In both test_expect_success
and test_perf, running "git" points to the version that is being
perf-tested. The $MODERN_GIT variable points to the git wrapper for the
currently checked-out version (i.e., the one that matches the t/perf
scripts you are running).  This is useful if your setup uses commands
that only work with newer versions of git than what you might want to
test (but obviously your new commands must still create a state that can
be used by the older version of git you are testing).
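
For instance, a setup step might use $MODERN_GIT for a command that the
older versions under test may lack; the commit-graph step below is only
an illustrative sketch:

	test_expect_success 'setup commit-graph' '
		$MODERN_GIT commit-graph write --reachable
	'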

For actual performance tests, use

	test_perf 'descriptive string' '
		command1 &&
		command2
	'

test_perf spawns a subshell, for lack of better options.  This means
that

* you _must_ export all variables that you need in the subshell

* you _must_ flag all variables that you want to persist from the
  subshell with 'test_export':

	test_perf 'descriptive string' '
		foo=$(git rev-parse HEAD) &&
		test_export foo
	'

  The so-exported variables are automatically marked for export in the
  shell executing the perf test.  For your convenience, test_export is
  the same as export in the main shell.

  This feature relies on a bit of magic using 'set' and 'source'.
  While we have tried to make sure that it can cope with embedded
  whitespace and other special characters, it will not work with
  multi-line data.

Rather than tracking the performance by run-time as `test_perf` does, you
may also track output size by using `test_size`. The stdout of the
test body should be a single numeric value, which will be captured and
shown in the aggregated output. For example:

	test_perf 'time foo' '
		./foo >foo.out
	'

	test_size 'output size' '
		wc -c <foo.out
	'

might produce output like:

	Test                origin           HEAD
	-------------------------------------------------------------
	1234.1 time foo     0.37(0.79+0.02)  0.26(0.51+0.02) -29.7%
	1234.2 output size             4.3M             3.6M -14.7%

The item being measured (and its units) is up to the test; the context
and the test title should make it clear to the user whether bigger or
smaller numbers are better. Unlike test_perf, the test code will only be
run once, since output sizes tend to be more deterministic than timings.
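
Putting the pieces together, a complete perf script might look like the
following sketch (the rev-list invocations are only placeholders for
whatever you want to measure):

	#!/bin/sh

	test_description='example performance test'
	. ./perf-lib.sh

	test_perf_default_repo

	test_perf 'rev-list --all' '
		git rev-list --all >/dev/null
	'

	test_perf 'rev-list --all --objects' '
		git rev-list --all --objects >objects.out
	'

	test_size 'rev-list --objects output size' '
		wc -c <objects.out
	'

	test_done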