Production servers have a habit of misbehaving in ways that standard monitoring tools don’t catch. CPU percentages look fine, memory is stable, error rates are low — and yet something feels off. Response times creep up under load, the event loop starts lagging, and you can’t point to a reason.
This is the story of how we investigated exactly that problem on a high-traffic Angular SSR storefront, the low-level OS metric that finally gave us a clear signal, and the surprisingly deep rabbit hole we had to dig through to get that signal wired into our application.
It is also a recipe. Every obstacle we hit — and there were six of them, at six different layers of the Angular SSR toolchain — applies to any Node.js native addon you want to load in an Angular 17+ SSR application. If you are trying to integrate hardware access, native crypto, image processing, or any other compiled native library into your Angular server, this walkthrough is for you. We will cover every error you will encounter, in the order you will encounter it, and exactly how to fix each one.
The Problem: Something Is Wrong, but What?
Our application is a multi-tenant e-commerce storefront built with Angular 17+ and server-side rendering. The SSR server handles all incoming requests — rendering Angular components on the server, injecting data, and streaming HTML to the browser. Under normal load it is fast and reliable. Under heavier load, or after certain deploys, it would start to feel sluggish in ways that were hard to pin down.
Standard APM tools — Datadog, New Relic, Azure Application Insights — gave us the usual metrics: request latency, CPU percentage, memory, error rate. All of them looked acceptable. Nothing obviously wrong.
What those tools were not telling us was whether the server process itself was healthy at the OS level. Specifically: was Node.js getting the CPU time it needed when it needed it, or was the OS scheduler constantly interrupting it?
Context switches: the metric APM tools miss
Every running process shares the CPU with other processes. The OS scheduler decides which process runs at any given moment, and it does so by performing context switches — pausing one process, saving its state, and handing the CPU to another.
There are two kinds. A voluntary context switch happens when a process willingly yields the CPU — because it is waiting for a network response, a file read, a timer, or any other I/O. This is completely normal for a Node.js server. The event loop spends most of its life waiting for things, and every await is effectively a voluntary yield.
An involuntary context switch happens when the OS forcibly preempts a process — because its time slice ran out, because a higher-priority process needed the CPU, or because the machine is under enough load that the scheduler has to juggle more aggressively. These are the interesting ones. A high count of involuntary context switches relative to voluntary ones is a sign that the process is not getting CPU time on demand — it is being evicted and waiting to be rescheduled. That shows up as event loop lag, slow response times, and that hard-to-name “something feels off” quality.
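On Linux, you can spot-check both counters without any native code by reading /proc/self/status, which exposes them as voluntary_ctxt_switches and nonvoluntary_ctxt_switches. A minimal sketch (Linux-only; it is a point-in-time read, not a replacement for a portable, event-driven monitor):

```javascript
import fs from 'node:fs';

// Parse the voluntary/nonvoluntary counters out of /proc/<pid>/status text.
function parseCtxtSwitches(statusText) {
  const counters = {};
  for (const line of statusText.split('\n')) {
    const match = line.match(/^(voluntary|nonvoluntary)_ctxt_switches:\s+(\d+)/);
    if (match) counters[match[1]] = Number(match[2]);
  }
  return counters;
}

// Spot check on Linux (the file does not exist on macOS or Windows).
if (fs.existsSync('/proc/self/status')) {
  console.log(parseCtxtSwitches(fs.readFileSync('/proc/self/status', 'utf8')));
}
```

getrusage is still the right tool for the addon: it works on macOS as well, and it can be polled on a timer from inside the process without string parsing.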
A useful rule of thumb: when involuntary context switches make up more than roughly 5% of the total, it is worth investigating. Not a hard threshold — your baseline depends on your workload and machine — but a reliable relative signal. If this ratio starts climbing during a deploy or a traffic spike, something changed.
The kernel exposes this data via the getrusage POSIX system call, available on Linux and macOS. Standard APM agents do not collect it. To get it, you need to go closer to the metal.
Why a Native Addon?
There is no pure JavaScript way to call getrusage. It is a C system call. To access it from Node.js, you need a native addon — a compiled shared library (.node file) that Node.js can load and call into.
Native addons are more common than they might seem. Some well-known examples:
- Hardware and OS access — serial ports, USB devices, GPIO pins on embedded hardware, raw socket access
- Cryptography — bindings to OpenSSL or platform-native crypto for operations where JavaScript implementations are too slow or not FIPS-compliant
- Image and media processing — bindings to libvips, ImageMagick, or FFmpeg for server-side image transformation
- Database drivers — some database clients use native addons to bypass JavaScript overhead for high-throughput operations
- Process monitoring — reading OS-level process statistics like CPU time, memory maps, file descriptor counts, and context switches
In all of these cases, the pattern is the same: you need something the OS or a native library provides, JavaScript cannot reach it directly, and a native addon is the bridge.
For our case, the goal was a lightweight monitor that calls getrusage on a timer and reports context switch counts back to JavaScript, where we could evaluate the ratio and fire an alert to Rollbar if it exceeded our threshold.
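That reporting loop can be sketched in a few lines. The handler factory and the injected reporter below are names invented for illustration, not part of procstat-napi's API — substitute your own error reporter for the Rollbar call:

```javascript
// Factory for a 'stats' handler that alerts when the involuntary share
// of context switches crosses a threshold. The reporter is injected so
// the logic stays testable without a real Rollbar client.
function makeStatsHandler(report, threshold = 0.05) {
  return (stats) => {
    const total =
      stats.voluntaryContextSwitches + stats.involuntaryContextSwitches;
    const ratio = total === 0 ? 0 : stats.involuntaryContextSwitches / total;
    if (ratio > threshold) {
      report(`Involuntary context switch ratio: ${(ratio * 100).toFixed(1)}%`, stats);
    }
  };
}

// Usage with the monitor described below:
//   monitor.on('stats', makeStatsHandler((msg, ctx) => rollbar.warning(msg, ctx)));
```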
Understanding the Pipeline Before You Break It
Before diving into the problems, here is a map of what Angular 17+’s SSR toolchain does to your server code. Each of the six problems lives at a different stage. Without this map, the errors look random.
When you run nx serve or nx build on an Angular SSR application, your code passes through four distinct environments:
┌─────────────────────────────────────────────────────────────────┐
│                       nx build / nx serve                       │
└────────────────────────────────┬────────────────────────────────┘
                                 │
              ┌──────────────────▼─────────────────┐
              │       1. esbuild (build time)      │
              │                                    │
              │ Compiles server TypeScript + deps  │
              │ into server.mjs. Resolves imports, │
              │ bundles assets. Static — no        │
              │ runtime knowledge.                 │
              └──────────────────┬─────────────────┘
                                 │
           nx serve?             │              nx build?
     ┌───────────────────────────┼───────────────────────────┐
     │                           │                           │
┌────▼─────────────────┐ ┌───────▼──────────────┐ ┌──────────▼───────────┐
│ 2. Vite dev server   │ │ 3. Prerender worker  │ │ 4. Node.js runtime   │
│    (serve only)      │ │    (build only)      │ │    (production)      │
│                      │ │                      │ │                      │
│ Intercepts module    │ │ Boots server.mjs     │ │ Runs server.mjs      │
│ loading at runtime   │ │ in a worker thread   │ │ directly. No Vite,   │
│ for HMR. Pre-        │ │ to extract routes    │ │ no bundler, just     │
│ bundles deps into    │ │ and pre-render       │ │ Node.js + dlopen.    │
│ its own cache.       │ │ HTML at build time.  │ │                      │
└──────────────────────┘ └──────────────────────┘ └──────────────────────┘
A native .node addon has to survive all four of these environments. That is why there are six problems.
The Addon and the Tooling
procstat-napi
procstat-napi is the addon built for this. It wraps getrusage(RUSAGE_SELF, ...) via the N-API (Node API) interface — the stable, ABI-versioned C API that Node.js exposes for native addon authors. N-API addons compile once and work across Node.js versions without recompilation, because the API is versioned and backward-compatible.
Internally, the addon uses a uv_timer_t — a timer handle from libuv, the async I/O library at the core of Node.js’s event loop — to call getrusage on a configurable interval and push the results back to JavaScript:
import { createMonitor } from 'procstat-napi';
const monitor = createMonitor({ intervalMs: 1000 });
monitor.on('stats', (stats) => {
const ratio = stats.involuntaryContextSwitches /
(stats.voluntaryContextSwitches + stats.involuntaryContextSwitches);
console.log(`Involuntary ratio: ${(ratio * 100).toFixed(1)}%`);
});
The API surface is minimal: on(event, callback) and off(event, callback). The addon also has an AddressSanitizer (ASan) integration for memory leak reporting, which we will come to in Problem 0.
Distribution: prebuildify
Native addons must be compiled from C++ source. Asking every user to compile on install requires a C++ toolchain and adds time to npm install. The better approach is to pre-compile binaries for each supported platform and bundle them inside the npm package.
prebuildify does this. Running prebuildify --napi produces a prebuilds/ folder with platform-specific .node binaries — for example, prebuilds/linux-x64+ia32/procstat-napi.node. Users get pre-compiled binaries, no compilation needed.
Loading: node-gyp-build-esm
The canonical runtime loader for prebuildify-produced binaries is node-gyp-build, but it has a critical flaw in modern bundler contexts: it constructs the .node file path dynamically at runtime. No bundler — esbuild, webpack, Rollup — can statically analyze a path that does not exist until the code runs. The binary is invisible to the build tool.
node-gyp-build has also been dormant since late 2024. So it was forked into node-gyp-build-esm: a dual-format (CJS + ESM) replacement that adds a prebuilds map — a factory function where each require() points to a static, known path:
import { load } from 'node-gyp-build-esm';
const binding = load(import.meta.dirname, () => ({
'linux-x64': () => require('./prebuilds/linux-x64+ia32/procstat-napi.node'),
'darwin-x64': () => require('./prebuilds/darwin-x64+arm64/procstat-napi.node'),
'win32-x64': () => require('./prebuilds/win32-x64+ia32/procstat-napi.node'),
}));
The factory is called lazily — only the matching platform’s thunk executes. esbuild can see all three require() calls at build time, copy the binaries to the output directory, and rewrite the paths. This static analyzability is the foundation that makes everything else possible.
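The keys in that map follow the usual node-gyp-build convention of combining process.platform and process.arch. A sketch of the lookup, assuming that convention (the real loader also handles folder variants like linux-x64+ia32 that serve multiple keys):

```javascript
// Derive the prebuilds lookup key for the current runtime,
// node-gyp-build style (convention assumed here).
function platformKey() {
  return `${process.platform}-${process.arch}`;
}

// Pick the matching thunk from a prebuilds map and invoke it lazily,
// so only the current platform's require() ever executes.
function loadFromMap(prebuilds) {
  const thunk = prebuilds[platformKey()];
  if (!thunk) throw new Error(`No prebuild for ${platformKey()}`);
  return thunk();
}
```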
With the addon written, prebuilds compiled, and the loader in place, it was time to wire it into the Angular SSR application. Here is every problem that followed.
Problem 0 — ASan Needs to Go First
Before any Angular-specific issues, there was a prerequisite specific to this addon: procstat-napi is compiled with AddressSanitizer (ASan) enabled. ASan is a memory error detector built into Clang and GCC — it instruments memory allocations and accesses at compile time to catch bugs like use-after-free and buffer overflows, and it can route leak reports back to JavaScript via the addon’s "leak" event.
ASan has one hard requirement: its runtime library (libasan.so on Linux) must be the very first library loaded into the process. It needs to intercept the system memory allocator from process startup. Node.js does not link against libasan.so, so when Node.js calls dlopen — the POSIX system call that loads a shared library into a running process — to open the .node addon, ASan finds itself arriving too late and aborts immediately:
ASan runtime does not come first in initial library list;
you should either link runtime to your application or
manually preload it with LD_PRELOAD.
The fix is LD_PRELOAD, an environment variable that the Linux dynamic linker reads before starting any process. Libraries listed there are loaded before everything else:
LD_PRELOAD=$(gcc -print-file-name=libasan.so) node dist/.../server.mjs
gcc -print-file-name=libasan.so resolves the correct absolute path for the current compiler version portably — more reliable than hardcoding a path like /usr/lib/x86_64-linux-gnu/libasan.so.6. This is required for both nx serve and production. In a container it belongs in your CMD or entrypoint script.
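In Docker terms, that looks roughly like the following sketch. The base image, package names, and paths are assumptions; adapt them to your distribution and output layout:

```dockerfile
# Illustrative sketch only: image, packages, and paths are assumptions.
FROM node:20-bookworm
# gcc is installed to resolve libasan.so's path; the ASan runtime
# library comes in through its toolchain dependencies.
RUN apt-get update && apt-get install -y --no-install-recommends gcc && \
    rm -rf /var/lib/apt/lists/*
COPY dist/apps/app-name /app
WORKDIR /app
# Resolve libasan.so at startup so the path tracks the installed toolchain.
CMD ["sh", "-c", "LD_PRELOAD=$(gcc -print-file-name=libasan.so) exec node server/server.mjs"]
```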
In startThreadMonitor, we actually use LD_PRELOAD as a guard condition before even attempting to load the addon:
if (!global_isServeMode && process.env['LD_PRELOAD']?.includes('asan')) {
const { createMonitor } = await import('procstat-napi');
// ...
}
This is intentional and self-protecting: if someone deploys without LD_PRELOAD, the monitor simply does not start rather than crashing the server.
Applies to: Any addon compiled with AddressSanitizer. If your addon does not use ASan, skip this step.
Problem 1 — esbuild Doesn’t Know What a .node File Is
With LD_PRELOAD in place, the next step was nx serve. The first esbuild error appeared immediately:
No loader is configured for ".node" files:
node_modules/procstat-napi/prebuilds/linux-x64+ia32/procstat-napi.node
node_modules/procstat-napi/index.mjs:24:27:
24 │ './prebuilds/linux-x64+ia32/procstat-napi.node',
The prebuilds map had done its job — esbuild followed the static require() string and found the binary. It just had no idea what to do with a compiled native binary. esbuild handles JavaScript, TypeScript, CSS, and JSON. A .node file is outside its model entirely.
The fix is a plugin, originally documented in esbuild issue #1051. It routes .node files through a virtual namespace, emits a small runtime wrapper, and uses esbuild’s built-in file loader to copy the binary to the output directory:
import { createRequire } from 'node:module';
const require = createRequire(import.meta.url);
const setupNativeNodeModulesPlugin = () => ({
name: 'native-node-modules',
setup(build) {
if (build.initialOptions.platform !== 'node') return;
// Resolve .node imports to absolute paths and move them into
// the "node-file" virtual namespace for custom loading.
build.onResolve({ filter: /\.node$/, namespace: 'file' }, (args) => ({
path: require.resolve(args.path, { paths: [args.resolveDir] }),
namespace: 'node-file',
}));
// Emit a small wrapper that requires the .node file at runtime
// using the path esbuild copies it to in the output directory.
build.onLoad({ filter: /.*/, namespace: 'node-file' }, (args) => ({
contents: `
import path from ${JSON.stringify(args.path)}
try { module.exports = require(path) }
catch {}
`,
}));
// Hand .node files back to the "file" namespace so esbuild's
// default file loader copies them to the output directory.
build.onResolve({ filter: /\.node$/, namespace: 'node-file' }, (args) => ({
path: args.path,
namespace: 'file',
}));
const opts = build.initialOptions;
opts.loader = opts.loader || {};
opts.loader['.node'] = 'file';
},
});
export default setupNativeNodeModulesPlugin;
Register this plugin via the plugins option in angular.json, available from Angular 17 onwards for the application builder.
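For reference, registration looks roughly like this in an Nx workspace, under the build target in project.json (the plugin file path is an assumption; in a plain Angular CLI workspace the same plugins option lives under the build target in angular.json):

```json
{
  "targets": {
    "build": {
      "executor": "@nx/angular:application",
      "options": {
        "plugins": ["./tools/esbuild/native-node-modules.plugin.mjs"]
      }
    }
  }
}
```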
Problem 2 — The Plugin Alone Isn’t Enough
The plugin did not fully clear the error. The missing piece was externalDependencies in project.json:
"executor": "@nx/angular:application",
"options": {
"externalDependencies": [
"./prebuilds/linux-x64+ia32/procstat-napi.node",
"./prebuilds/darwin-x64+arm64/procstat-napi.node",
"./prebuilds/win32-x64+ia32/procstat-napi.node"
]
}
This maps directly to esbuild’s external configuration on the server bundle. When a path is marked external, esbuild stops trying to process it entirely and preserves the require() call verbatim in the output. The .node file is then resolved at runtime by Node.js’s native module loader, which calls dlopen and knows exactly what to do.
The plugin and externalDependencies serve different purposes and both are needed. The plugin teaches esbuild to copy .node files to the output directory when it encounters them through its normal resolution pipeline. The external config is the hard guarantee that esbuild never tries to bundle or transform those paths regardless of how it encounters them.
Problem 3 — Vite’s Resolver Breaks Relative Paths
With the esbuild configuration sorted, nx serve completed without build errors. Opening localhost produced this in the terminal:
[vite] (ssr) Error when evaluating SSR module ./server.mjs:
Cannot find module './prebuilds/linux-x64+ia32/procstat-napi.node'
Require stack:
- /home/project/.angular/cache/21.2.3/ng21-test/vite/deps_ssr/chunk-6DU2HRTW.js
This is a different problem in a different environment. In nx serve, Angular does not run the esbuild-compiled server.mjs directly. It uses Vite as the dev server for HMR. Vite registers a Node.js module customization hook — a mechanism introduced in Node.js 20 that lets tools intercept the module resolution pipeline:
function customizationHookResolve(specifier, context, nextResolve) {
if (specifier.startsWith(customizationHookNamespace)) {
let data = specifier.slice(42),
[parsedSpecifier, parsedImporter] = JSON.parse(data);
specifier = parsedSpecifier;
context.parentURL = parsedImporter;
}
return nextResolve(specifier, context);
}
module = (await import('node:module')).Module;
module.registerHooks({ resolve: customizationHookResolve });
Vite pre-bundles procstat-napi into its own dependency cache at .angular/cache/.../vite/deps_ssr/chunk-6DU2HRTW.js. From that location, the relative path ./prebuilds/linux-x64+ia32/procstat-napi.node points nowhere — the original node_modules/procstat-napi/ directory is no longer the resolution base.
The standard Vite solution — adding the package to ssr.external in vite.config.ts — is not available because Angular’s dev server does not expose a vite.config.ts.
The workaround is to detect Vite at runtime inside procstat-napi’s index.mjs and switch to absolute paths when it is present. Vite always injects import.meta.env.MODE and import.meta.env.BASE_URL into modules it processes — bare Node.js and esbuild output do not. That distinction is the heuristic:
const isVite =
typeof import.meta !== 'undefined' &&
!!import.meta.env?.MODE &&
!!import.meta.env.BASE_URL;
if (isVite) {
binding = load(import.meta.dirname, () => ({
'linux-x64': () =>
require(join(process.cwd(),
'node_modules/procstat-napi/prebuilds/linux-x64+ia32/procstat-napi.node')),
'darwin-x64': () =>
require(join(process.cwd(),
'node_modules/procstat-napi/prebuilds/darwin-x64+arm64/procstat-napi.node')),
'win32-x64': () =>
require(join(process.cwd(),
'node_modules/procstat-napi/prebuilds/win32-x64+ia32/procstat-napi.node')),
}));
}
An absolute path bypasses Vite’s resolver entirely — Node’s native require hands it straight to dlopen. The process.cwd() anchor assumes Nx runs the dev server from the workspace root, which is reliable in practice but worth noting if your setup differs.
Problem 4 — createMonitor is not a function During Prerender
With nx serve working, the next step was a production build. Running with NG_BUILD_MANGLE=0 (which disables name mangling to keep the output readable) surfaced:
✘ [ERROR] An error occurred while extracting routes.
createMonitor is not a function
This comes from Angular’s static route extraction pass — a build-time step where Angular boots the compiled server bundle in a Node.js worker thread, walks all routes, and pre-renders HTML. The worker thread environment is different enough from the production server that the addon binding came back broken, and since the addon was imported at the top level of server.ts, the failure aborted route extraction entirely.
The solution is to make the monitor initialization lazy — wrapped in a dynamic import() — and to guard it with a try-catch that inspects the error stack. When the addon fails inside Angular’s prerender worker, the call stack contains prerender-root, an artifact of how Angular names its worker thread entry point:
async function startMonitor() {
try {
const { createMonitor } = await import('procstat-napi');
const monitor = createMonitor({ intervalMs: 1000 });
monitor.on('stats', (stats) => {
console.log('stats = ', stats);
});
} catch (error) {
if (error instanceof Error && error.stack?.includes('prerender-root')) {
// We're inside Angular's prerender/route-extraction worker — the addon
// cannot load here and isn't needed. Swallow silently.
return;
}
throw error;
}
}
startMonitor();
Rather than predicting the prerender context upfront, you let the failure happen, confirm it came from the right place, and only then discard it. A genuine error — missing prebuild, wrong architecture, corrupted binary — still propagates. The prerender-root string is an internal Angular detail and could change, but if it does the failure mode is a thrown error rather than a silently swallowed real problem.
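The pattern generalizes to any addon and any build-time worker. Distilled into a reusable shape (a generic sketch; isExpectedFailure and tryStart are names invented here, not Angular or procstat-napi APIs):

```javascript
// Classify a failure by where it happened: swallow it only when the
// stack proves it came from a known, expected context.
function isExpectedFailure(error, stackMarker) {
  return error instanceof Error && !!error.stack?.includes(stackMarker);
}

async function tryStart(loadFn, stackMarker) {
  try {
    return await loadFn();
  } catch (error) {
    if (isExpectedFailure(error, stackMarker)) return undefined; // e.g. prerender worker
    throw error; // genuine failure: propagate loudly
  }
}
```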
For completeness, there is also a pre-emptive alternative: Angular’s prerender worker sets NG_ALLOWED_HOSTS=localhost in its environment, which can be used as an early-exit heuristic for the happy path:
async function startMonitor() {
// NG_ALLOWED_HOSTS is set by Angular's internal render worker — not a stable
// public API. Used here as a best-effort happy-path heuristic only.
if (process.env['NG_ALLOWED_HOSTS']) return;
const { createMonitor } = await import('procstat-napi');
// ...
}
The tradeoff: if Angular ever removes that variable, the guard silently disappears. The try-catch approach fails loudly if its assumption breaks.
Problem 5 — Duplicate createRequire Binding Collision
With the prerender guard in place, the production build completed — but there was a runtime failure lurking in the compiled output. Angular’s application builder injects a banner snippet at the top of every server bundle to polyfill require in ESM context, because esbuild does not provide a require shim for ESM output (see esbuild issue #1921):
// Injected by Angular CLI at the top of server.mjs
import { createRequire } from 'node:module';
globalThis['require'] ??= createRequire(import.meta.url);
This banner is injected as a raw string outside the module graph — esbuild cannot deduplicate it against other imports. Both node-gyp-build-esm and procstat-napi also import createRequire from node:module in their own code. In the flat bundled ESM output, multiple import { createRequire } from 'node:module' declarations appear at the same lexical scope — a binding collision that breaks at runtime.
The fix was a one-line change to Angular CLI, renaming the injected binding to a namespaced identifier:
// Before
import { createRequire } from 'node:module';
globalThis['require'] ??= createRequire(import.meta.url);
// After — PR #32765
import { createRequire as __ngCreateRequire } from 'node:module';
globalThis['require'] ??= __ngCreateRequire(import.meta.url);
The PR is at angular/angular-cli#32765. This is not a procstat-napi-specific bug — any native addon or library that imports createRequire directly would hit the same collision.
Problem 6 — The .node Files Land in the Wrong Output Directory
After a successful build, running the production server revealed one final problem: the .node binaries were missing at runtime.
Angular’s application builder runs esbuild in two separate passes — one for the browser bundle and one for the server bundle. The esbuild plugin copies .node files to the output directory of whichever pass is running. The browser pass runs first and picks up the binaries, landing them at:
dist/apps/APP_NAME/browser/media/procstat-napi-KKPURPS6.node
The server bundle’s compiled require('./media/procstat-napi-KKPURPS6.node') expects them at the same relative path from the server output directory — dist/apps/APP_NAME/server/media/ — which does not exist.
There is no way to redirect the file loader’s output path from inside the esbuild plugin. The solution is a post-build script:
// scripts/copy-node-addons/index.js
import fs from 'node:fs';
import path from 'node:path';
import { output, workspaceRoot } from '@nx/devkit';
import { sync } from 'glob';
const appName = process.env.NX_TASK_TARGET_PROJECT;
const addons = sync(
path.join(workspaceRoot, `dist/apps/${appName}/browser/media/*.node`)
);
if (
!fs.existsSync(path.join(workspaceRoot, `dist/apps/${appName}/server/media`))
) {
fs.mkdirSync(
path.join(workspaceRoot, `dist/apps/${appName}/server/media`),
{ recursive: true }
);
output.log({ title: `Created dist/apps/${appName}/server/media folder` });
}
// Native addons are build-target-agnostic — the same binary serves both.
for (const addon of addons) {
fs.copyFileSync(addon, addon.replace('browser', 'server'));
output.log({ title: `Copied addon into ${addon.replace('browser', 'server')}` });
}
Wired into Nx as a build-with-deps target that depends on the main build:
"build-with-deps": {
"executor": "nx:run-commands",
"dependsOn": [{ "target": "build", "params": "forward" }],
"options": {
"commands": ["node scripts/copy-node-addons"]
}
}
Run with yarn nx build-with-deps app-name.
What the Monitoring Actually Found
After working through all six problems, the addon was running in production. Within the first monitoring interval, Rollbar received this:
voluntaryContextSwitches: 725
involuntaryContextSwitches: 358
A ratio of 33% — more than six times the 5% threshold we considered worth investigating. The monitoring had caught something real immediately.
The culprit, once we started looking, was orphaned SSR renders. When a user closes a browser tab or navigates away mid-request, the client connection drops. But the Node.js server, by default, has no way of knowing the client is gone — it keeps rendering the full Angular page, fetching data from APIs, building the HTML response, and then discarding it because there is nobody to receive it. Each of those abandoned renders burns CPU time, keeps the event loop busy, and creates exactly the kind of unnecessary CPU churn that drives up involuntary context switches.
The fix was abort signal propagation. When the underlying TCP socket closes, an AbortController fires, which is passed to angularNodeAppEngine.handle() as abortSignal. Inside the Angular application, a function reads that signal from the request context and destroys the Angular platform when it fires, stopping the render in progress:
// In server.ts — detect client disconnect and abort the render
const abortController = new AbortController();
const onClose = () => {
if (abortController.signal.aborted) return;
abortController.abort();
};
req.on('close', onClose);
req.socket.on('close', onClose);
angularNodeAppEngine.handle(req, {
abortSignal: abortController.signal,
// ...
});
// In app.component.ts — respond to the abort signal inside Angular
function abortOnPlatformDestroy() {
const context = inject<any>(REQUEST_CONTEXT);
const abortSignal: AbortSignal | undefined = context.abortSignal;
if (abortSignal == null) return;
const platform = inject(PlatformRef);
const onAbort = () => {
if (platform.destroyed) return;
queueMicrotask(() => {
if (platform.destroyed) return;
platform.destroy(); // stops the render, cleans up DI, frees resources
});
};
if (abortSignal.aborted) {
onAbort();
return;
}
abortSignal.addEventListener('abort', onAbort);
platform.onDestroy(() => abortSignal.removeEventListener('abort', onAbort));
}
After deploying the abort propagation, the involuntary context switch ratio dropped back below the 5% threshold and the Rollbar warnings stopped. The monitoring had identified a real inefficiency, the fix was targeted, and the signal validated that it worked.
Without the context switch monitoring, we would have had no clear way to see this problem, let alone confirm that the fix resolved it. CPU percentage remained acceptable throughout — the burden was subtle enough to not show up in standard metrics but significant enough to affect server behaviour under load.
The Complete Picture
What started as “call getrusage from Node.js” turned into a seven-stop tour through the Angular SSR toolchain. The table below is a checklist — applicable to any N-API addon, not just procstat-napi:
| # | Error | Pipeline stage | Fix |
|---|-------|----------------|-----|
| 0 | ASan runtime does not come first | OS dynamic linker | LD_PRELOAD=$(gcc -print-file-name=libasan.so) |
| 1 | No loader configured for ".node" files | esbuild build time | esbuild native-node-modules plugin |
| 2 | Plugin alone not enough | Nx/Angular builder config | externalDependencies in project.json |
| 3 | Cannot find module in dev server | Vite SSR module resolver | Vite detection heuristic + process.cwd() absolute paths |
| 4 | createMonitor is not a function during prerender | Angular prerender worker | try-catch with prerender-root stack check + dynamic import() |
| 5 | Duplicate createRequire binding collision | Angular CLI banner injection | PR #32765 to Angular CLI |
| 6 | .node files in browser/media/, not server/media/ | esbuild dual-pass output | Post-build copy script |
Problem 0 only applies if your addon is compiled with ASan. Problems 1–6 apply to any N-API addon in any Angular 17+ SSR application using @angular/ssr. The fixes are independent — apply them one at a time as you encounter each error.
None of this was documented anywhere. Most of it required reading Angular CLI, Vite, and esbuild source code to understand. One problem required a patch to Angular CLI itself. The effort was worth it: the monitoring caught a real production issue on its first run that standard APM tooling had been missing entirely.
Hopefully this article means the next person to bring a native addon into an Angular SSR application has a map before they start.
Repositories
- procstat-napi — N-API addon for process statistics via getrusage
- node-gyp-build-esm — ESM-compatible fork of node-gyp-build with static prebuilds map support
- angular/angular-cli#32765 — the createRequire naming collision fix