Setup

```bash
az account set --subscription "Azure subscription 1"
az configure --defaults group=simpleLinuxTestVMResourceGroup location=eastus
az group create --resource-group simpleLinuxTestVMResourceGroup --location eastus
```

```bash
# Requires GNU find for -files0-from
# Requires elfx86exts
nix path-info -r "$1" \
  | tr '\n' '\0' \
  | find -L -files0-from - -type f -exec sha256sum {} \+ \
  | sed -E 's/([a-f0-9]+) (\/nix\/store\/[a-z0-9]{32}-(.+))/\3 \1 \2/' \
  | sort -s -k 1 \
  | xargs -n1 -d '\n' bash -c \
    '
```
```
Sourcing python-imports-check-hook.sh
Using pythonImportsCheckPhase
source: sourcing cudaHook.bash (hostOffset=-1) (targetOffset=0)
source: added cudaHookRegistration to prePhases
source: added cudaSetupCudaToolkitRoot to envHooks for hostOffset=-1
source: sourcing nvccHook.bash (hostOffset=-1) (targetOffset=0)
source: added nvccHookRegistration to prePhases
Sourcing fix-elf-files.sh
source: sourcing cudaHook.bash (hostOffset=0) (targetOffset=1)
source: added cudaHookRegistration to prePhases
```
```bash
nix-eval-jobs --flake .#hydraJobs.sm_89.x86_64-linux --store local --constituents \
  | jq -cr '.constituents + [.drvPath] | .[] | select(.!=null) + "^*"' \
  | nom build --keep-going --no-link --print-out-paths --stdin
```
This was run on a quiet system with an i9-13900K, 96 GB of DDR5 RAM, and several NVMe SSDs striped (RAID0) in a ZFS pool. Memory usage never required swapping, and with hyperfine running two warmups, the ZFS ARC and L2ARC effectively cached everything, so IO was not a bottleneck. Thermal throttling is unlikely, since the workload was largely single-core and temperatures stayed under 30 °C, but it is possible.
Additionally, I did not disable boost clocks, pin the process's CPU affinity, raise its priority, or apply other tweaks which could have reduced noise in the results.
The numbers are presented without any liability or guarantee of accuracy. The workload isn't representative of a typical Nix evaluation (it essentially recurses over Nixpkgs to build a list of lists of attribute paths to all derivations which successfully evaluate), so the numbers aren't indicative of the performance each build of Nix would see in practice.
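For a sense of what that workload looks like, here is a minimal sketch, not the exact code used: it walks `pkgs` with `builtins.tryEval`, keeping the attribute paths of derivations whose `outPath` evaluates. The names `classify` and `collect` are mine, purely for illustration.

```nix
# Rough reconstruction of the benchmark workload: recurse over Nixpkgs,
# collecting attribute paths of derivations that evaluate successfully.
let
  pkgs = import <nixpkgs> { };
  inherit (pkgs) lib;

  # Classify a value without letting evaluation failures escape.
  classify =
    value:
    builtins.tryEval (
      if lib.isDerivation value then
        # Force enough of the derivation to surface evaluation errors.
        builtins.seq value.outPath "drv"
      else if lib.isAttrs value && (value.recurseForDerivations or false) then
        "recurse"
      else
        "skip"
    );

  collect =
    path: attrs:
    lib.concatLists (
      lib.mapAttrsToList (
        name: value:
        let
          path' = path ++ [ name ];
          res = classify value;
        in
        if !res.success then
          [ ] # evaluation threw; drop this attribute
        else if res.value == "drv" then
          [ path' ]
        else if res.value == "recurse" then
          collect path' value
        else
          [ ]
      ) attrs
    );
in
collect [ ] pkgs
```

Because each candidate's `outPath` is forced inside `builtins.tryEval`, evaluation failures are recorded as skips rather than aborting the whole walk.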
Frustrated with the lack of introspection into the cost of evaluating a single attribute.
Sad that nix-community/nix-eval-jobs#251 is not implemented.
Using https://github.com/ConnorBaker/cuda-packages/tree/main at commit 371d9ee216cde9060a8528c447ee5cc6fd230296:

```bash
$ nix eval --allow-unsafe-native-code-during-evaluation --json .#legacyPackages.x86_64-linux.adaCudaPackagesDrvAttrEval | jq
```

Most of this is summarized from https://github.com/NixOS/nixpkgs/blob/bc061915ac2cf395e4d1f021cb565f495edfa24a/doc/stdenv/cross-compilation.chapter.md.
In Nixpkgs, dependency offsets describe the relative positioning of dependencies in the build process with respect to the build, host, and target platforms. When we're not cross-compiling, the build, host, and target platforms are all the same, and none of this really matters. (Although, with `strictDeps = true`, it can be important, since you're telling Nix to enforce separation between build-time and run-time dependencies.)
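As a concrete illustration, consider a minimal, hypothetical derivation (`offset-demo` is made up for this example); each dependency attribute corresponds to a (hostOffset, targetOffset) pair relative to the package being built:

```nix
{ stdenv, cmake, openssl }:

stdenv.mkDerivation {
  pname = "offset-demo"; # hypothetical package, for illustration only
  version = "0.1";
  strictDeps = true;

  # Offsets are (hostOffset, targetOffset) relative to this derivation:
  #   depsBuildBuild    = (-1, -1): runs on the build platform, produces
  #                                 output for the build platform
  #   nativeBuildInputs = (-1,  0): runs on the build platform, targets
  #                                 the host platform (compilers, hooks)
  #   buildInputs       = ( 0,  1): present on the host platform at run time
  nativeBuildInputs = [ cmake ];
  buildInputs = [ openssl ];
}
```

These pairs are what the hook log above is printing: a setup hook sourced with (hostOffset=-1) (targetOffset=0), for instance, arrived through a dependency in the nativeBuildInputs position.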
Some key points:
```awk
function human_readable(size) {
    # Include PB in the units array to avoid separate handling
    split("B KB MB GB TB PB", units);
    unit_idx = 1; # Start with bytes
    while (size >= 1024 && unit_idx < length(units)) {
        size /= 1024;
        unit_idx++;
    }
    return sprintf("%.6f %s", size, units[unit_idx]);
}
```
```nix
{
  allowAliases = false;
  allowBroken = false;
  allowUnfree = true;
  checkMeta = true;
  cudaSupport = true;
  cudaCapabilities = [ "7.5" ];
  packageOverrides =
    pkgs:
    let
```
```bash
#!/usr/bin/env bash
set -euo pipefail
json_cuda_packages_categorized=$(nix eval --impure --json .#cudaPackages --apply '
  attrs:
  let
    drvKind = drvName:
      let
        drv = attrs.${drvName};
```