A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space, but not loaded in physical memory. (emphasis mine)
Okay, that makes sense.
But if that's the case, why is it that whenever the process information in Process Hacker is refreshed, I see about 15 page faults?
Or to put it another way, why is any memory getting paged out at all? (I have no idea whether it's user or kernel memory.) I have no page file, and RAM usage is about 1.2 GB out of 4 GB, and that's right after a clean reboot. There's no shortage of any resource, so why would anything get paged out?
(I'm the author of Process Hacker.)
Firstly:
A page fault is a trap to the software raised by the hardware when a program accesses a page that is mapped in the virtual address space, but not loaded in physical memory.
That's not entirely correct, as explained later in the same article (Minor page fault). There are soft page faults, where all the kernel needs to do is add a page to the working set of the process. Here's a table from the Windows Internals book (I've excluded the ones that result in an access violation):
Page faults can occur for a variety of reasons, as you can see above, and only one of them involves reading from the disk. If you allocate a block from the heap and the heap manager has to commit new pages, the first access to those pages produces a demand-zero page fault. If you try to hook a function in kernel32 by writing to kernel32's pages, you'll get a copy-on-write fault, because those pages are silently copied so that your changes don't affect other processes.
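To make the demand-zero case concrete, here's a minimal sketch of my own (not from the article or the book, and assuming a plain Win32 build): it commits a region with VirtualAlloc and then writes to each page, and the process's PageFaultCount from GetProcessMemoryInfo should jump by roughly the number of pages touched, even though nothing is read from disk.

```c
/* Minimal sketch: first touches of freshly committed memory show up as
 * demand-zero (soft) page faults. Assumes a Win32 build; link with
 * psapi.lib for GetProcessMemoryInfo on older toolchains. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

static DWORD CurrentFaultCount(void)
{
    PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };

    /* PageFaultCount includes both hard and soft faults. */
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PageFaultCount;
}

int main(void)
{
    SYSTEM_INFO si;
    SIZE_T pageSize, size, offset;
    DWORD before, after;
    BYTE *p;

    GetSystemInfo(&si);
    pageSize = si.dwPageSize;
    size = 64 * pageSize;

    /* Committing the region doesn't populate any physical pages yet. */
    p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!p)
        return 1;

    before = CurrentFaultCount();

    /* The first write to each page triggers a demand-zero fault: the
     * memory manager adds a zero-filled page to the working set. */
    for (offset = 0; offset < size; offset += pageSize)
        p[offset] = 1;

    after = CurrentFaultCount();
    printf("Page faults incurred: %lu\n", (unsigned long)(after - before));

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```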
Now, to answer your question more specifically: Process Hacker only seems to incur page faults when it updates its service information - that is, when it calls EnumServicesStatusEx, which RPCs to the SCM (services.exe). My guess is that a lot of memory gets allocated in the course of that call, leading to demand-zero page faults (the service information takes several pages to store, IIRC).
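For reference, here's a rough sketch of the usual EnumServicesStatusEx calling pattern (my own illustration, not Process Hacker's actual code): ask the SCM how big the service list is, allocate a fresh buffer of that size, and let the second call fill it in. That buffer is typically tens of kilobytes, so its pages are brand new and get touched for the first time right there, which shows up as demand-zero faults.

```c
/* Rough sketch of the usual EnumServicesStatusEx pattern; not Process
 * Hacker's actual code. Link with advapi32.lib. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SC_HANDLE scm;
    DWORD bytesNeeded = 0, bufferSize, count = 0, resume = 0;
    BYTE *buffer;

    scm = OpenSCManager(NULL, NULL, SC_MANAGER_ENUMERATE_SERVICE);
    if (!scm)
        return 1;

    /* First call with no buffer: fails with ERROR_MORE_DATA and reports
     * how many bytes the service list needs. */
    EnumServicesStatusEx(scm, SC_ENUM_PROCESS_INFO, SERVICE_WIN32,
        SERVICE_STATE_ALL, NULL, 0, &bytesNeeded, &count, &resume, NULL);

    /* Typically tens of KB, i.e. several newly committed heap pages. */
    bufferSize = bytesNeeded;
    buffer = HeapAlloc(GetProcessHeap(), 0, bufferSize);
    if (!buffer)
    {
        CloseServiceHandle(scm);
        return 1;
    }

    /* The second call fills the buffer; writing into those fresh pages
     * is where the demand-zero faults come from. */
    if (EnumServicesStatusEx(scm, SC_ENUM_PROCESS_INFO, SERVICE_WIN32,
        SERVICE_STATE_ALL, buffer, bufferSize, &bytesNeeded, &count,
        &resume, NULL))
    {
        printf("%lu services in a %lu-byte buffer\n",
            (unsigned long)count, (unsigned long)bufferSize);
    }

    HeapFree(GetProcessHeap(), 0, buffer);
    CloseServiceHandle(scm);
    return 0;
}
```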