## Memory Analysis Process
This section guides you through analyzing memory usage for greptimedb.
- Get the `jeprof` tool script; see the next section ("Getting the `jeprof` tool") for details.
- After starting greptimedb (with the env var `MALLOC_CONF=prof:true`), execute the `dump.sh` script with the PID of the greptimedb process as an argument. The script continuously monitors memory usage and captures a profile whenever usage exceeds a growth threshold (e.g. +20 MB within 10 minutes), writing `greptime-{timestamp}.gprof` files; see the monitoring sketch after this list.
- Once you have 2-3 gprof files, run `gen_flamegraph.sh` in the same environment to generate flame graphs showing memory allocation call stacks; see the pipeline sketch after this list.
- NOTE: the `gen_flamegraph.sh` script requires `jeprof`, and optionally `flamegraph.pl`, to be in the current directory. If you need to generate flame graphs right away, run the `get_flamegraph_tool.sh` script, which downloads the flame graph generation tool `flamegraph.pl` to the current directory. The usage of `gen_flamegraph.sh` is `./gen_flamegraph.sh <binary_path> <gprof_directory>`, where `<binary_path>` is the path to the greptimedb binary and `<gprof_directory>` is the directory containing the gprof files (the directory `dump.sh` dumps profiles to). Example call: `./gen_flamegraph.sh ./greptime .`. Generating the flame graphs might take a few minutes. They are written to the `<gprof_directory>/flamegraphs` directory; if no `flamegraph.pl` is found, that directory will only contain `.collapse` files, which is also fine.
- You can send the generated flame graphs (the entire `<gprof_directory>/flamegraphs` folder) to developers for further analysis.
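To make the monitoring step concrete, the sketch below shows the general idea behind `dump.sh`: poll the process RSS and trigger a profile dump once growth crosses the threshold. This is an illustration only; the interval, the threshold handling, and the `trigger_dump` helper are assumptions, not the actual script.

```bash
#!/usr/bin/env bash
# Illustrative sketch, NOT the actual dump.sh: watch a process's RSS and
# react when it grows by more than 20 MB within a 10-minute window.
PID=$1
THRESHOLD_KB=$((20 * 1024))   # +20 MB, in kB as /proc reports it
INTERVAL=600                  # 10 minutes between samples

rss_kb() {
  awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

trigger_dump() {
  # Hypothetical helper: how the profile is actually dumped depends on the
  # deployment (jemalloc's prof.dump must be reachable somehow, e.g. via
  # an admin endpoint of the server).
  echo "would dump profile to greptime-$(date +%s).gprof"
}

last=$(rss_kb "$PID")
while kill -0 "$PID" 2>/dev/null; do
  sleep "$INTERVAL"
  now=$(rss_kb "$PID")
  if [ $((now - last)) -gt "$THRESHOLD_KB" ]; then
    trigger_dump
  fi
  last=$now
done
```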
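Similarly, here is a condensed sketch of the per-profile flame graph pipeline, assuming a `jeprof` build that supports the `--collapsed` output flag (consistent with the `.collapse` files mentioned above). The real `gen_flamegraph.sh` iterates over all profiles in the directory.

```bash
# Sketch of the per-profile pipeline, not the full gen_flamegraph.sh.
BINARY=./greptime
PROFILE=greptime-1700000000.gprof   # example file name

# Symbolize the heap profile and emit folded stacks.
./jeprof --collapsed "$BINARY" "$PROFILE" > "$PROFILE.collapse"

# If flamegraph.pl is present, render the folded stacks as an SVG.
if [ -f flamegraph.pl ]; then
  perl flamegraph.pl "$PROFILE.collapse" > "$PROFILE.svg"
fi
```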
## Getting the `jeprof` tool
There are three ways to get `jeprof`, listed here from simplest to most involved. Any one of them works, as long as you use the same environment that greptimedb will run in:
- If you are compiling greptimedb from source, `jeprof` is already produced during compilation. After running `cargo build`, execute `find_compiled_jeprof.sh`; this copies `jeprof` to the current directory.
- Or, if you have the Rust toolchain installed locally, follow these commands:
```bash
cargo new get_jeprof
cd get_jeprof
```
Then add this dependency to `Cargo.toml`:
```toml
[dependencies]
tikv-jemalloc-ctl = { version = "0.6", features = ["use_std", "stats"] }
```
Then run:

```bash
cargo build
```

After that, the `jeprof` tool has been produced as part of the build. Now run `find_compiled_jeprof.sh` in the current directory; it will copy the `jeprof` tool to the current directory.
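For reference, the essence of what `find_compiled_jeprof.sh` needs to do is small: the jemalloc build places `jeprof` somewhere under Cargo's `target/` directory, so the script locates it and copies it out. The sketch below is an assumption about that mechanism, not the script itself.

```bash
# Sketch of the idea behind find_compiled_jeprof.sh; the real script may
# search differently. Locate the jeprof produced by the jemalloc build
# under target/ and copy it to the current directory.
JEPROF=$(find target -type f -name jeprof | head -n 1)
if [ -n "$JEPROF" ]; then
  cp "$JEPROF" ./jeprof && chmod +x ./jeprof
  echo "copied $JEPROF to ./jeprof"
else
  echo "jeprof not found under target/; did cargo build finish?" >&2
  exit 1
fi
```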
- Or, compile jemalloc from source: first clone this repo and check out this commit:
```bash
git clone https://github.com/tikv/jemalloc.git
cd jemalloc
git checkout e13ca993e8ccb9ba9847cc330696e02839f328f7
```
Then run:

```bash
./configure
make
```

After the build, `jeprof` is in the `bin/` directory. Copy it to the current directory.
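Whichever method you choose, a quick sanity check before the capture session can save time. `jeprof` is a Perl script, so invoking it with `--help` (an assumed check, not something the original scripts require) confirms it runs in your environment:

```bash
# jeprof is a Perl script; printing its usage text confirms it is runnable.
perl ./jeprof --help | head -n 5
```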