I implemented the following background processing thread, where Jobs is a Queue&lt;TriggerData&gt;:
static void WorkThread()
{
    while (working)
    {
        TriggerData job = null;
        lock (Jobs)
        {
            if (Jobs.Count > 0)
                job = Jobs.Dequeue();
        }
        if (job == null)
        {
            Thread.Sleep(1);
        }
        else
        {
            // [snip]: Process job.
        }
    }
}
This produced a noticeable delay between when the jobs were entered and when they actually started running (batches of jobs are entered at once, and each job is only [relatively] small). The delay wasn't a huge deal, but I got to thinking about the problem and made the following change:
static ManualResetEvent _workerWait = new ManualResetEvent(false);
// ...
if (job == null)
{
    lock (_workerWait)
    {
        _workerWait.Reset();
    }
    _workerWait.WaitOne();
}
The thread adding jobs now locks _workerWait and calls _workerWait.Set() when it's done adding jobs. This solution (seemingly) starts processing jobs instantly, and the delay is gone altogether.
My question is partly "Why does this happen?", given that Thread.Sleep(int) can very well sleep for longer than you specify, and partly "How does the ManualResetEvent achieve this level of performance?".
EDIT: Since someone asked about the function that's queueing items, here it is, along with the full system as it stands at the moment.
public void RunTriggers(string data)
{
    lock (this.SyncRoot)
    {
        this.Triggers.Sort((a, b) => { return a.Priority - b.Priority; });
        foreach (Trigger trigger in this.Triggers)
        {
            lock (Jobs)
            {
                Jobs.Enqueue(new TriggerData(this, trigger, data));
                _workerWait.Set();
            }
        }
    }
}
static private ManualResetEvent _workerWait = new ManualResetEvent(false);

static void WorkThread()
{
    while (working)
    {
        TriggerData job = null;
        lock (Jobs)
        {
            if (Jobs.Count > 0)
                job = Jobs.Dequeue();
            if (job == null)
            {
                _workerWait.Reset();
            }
        }
        if (job == null)
            _workerWait.WaitOne();
        else
        {
            try
            {
                foreach (Match m in job.Trigger.Regex.Matches(job.Data))
                    job.Trigger.Value.Action(job.World, m);
            }
            catch (Exception ex)
            {
                job.World.SendLineToClient("\r\n\x1B[32m -- {0} in trigger ({1}): {2}\x1B[m",
                    ex.GetType().ToString(), job.Trigger.Name, ex.Message);
            }
        }
    }
}
Events are synchronization primitives provided by the OS/kernel that are designed for exactly this sort of thing. The kernel provides a boundary within which it can guarantee atomic operations, which is important for synchronization (some atomicity can also be achieved in user space with hardware support).
In short, when a thread waits on an event it is put on a waiting list for that event and marked as non-runnable. When the event is signalled, the kernel wakes up the threads on the waiting list and marks them as runnable, and they can continue to run. It's naturally a huge benefit that a thread can wake up immediately when the event is signalled, versus sleeping for a long time and rechecking the condition every now and then.
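To make that handoff concrete, here's a minimal sketch (not your code, just an illustration; the class and variable names are made up) of the wait/signal pattern: the worker parks itself in WaitOne() and is woken as soon as another thread calls Set().
using System;
using System.Threading;

class EventWakeupSketch
{
    // Illustrative only: the worker blocks on the event and the kernel makes it
    // runnable again as soon as another thread signals it.
    static readonly ManualResetEvent _signal = new ManualResetEvent(false);

    static void Main()
    {
        var worker = new Thread(() =>
        {
            Console.WriteLine("Worker: waiting on the event...");
            _signal.WaitOne();   // parked on the event's wait list, not scheduled
            Console.WriteLine("Worker: signalled, running again.");
        });
        worker.Start();

        Thread.Sleep(500);       // the producer does something else for a while
        Console.WriteLine("Main: calling Set().");
        _signal.Set();           // kernel marks the waiter runnable; it wakes right away
        worker.Join();
    }
}
No polling loop is involved: between Start() and Set() the worker consumes no CPU at all, and it resumes as soon as the signal arrives.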
Even one millisecond is a really, really long time: you could have processed thousands of events in that time. Also, the timer resolution is traditionally around 10 ms, so sleeping for less than 10 ms usually just results in a roughly 10 ms sleep anyway. With an event, a thread can be woken up and scheduled immediately.
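As a rough illustration of that difference (a sketch I'm adding, not part of the question; the exact numbers depend on the OS timer resolution and system load), you can time how long Thread.Sleep(1) actually takes and compare it with the wake-up latency of a signalled event:
using System;
using System.Diagnostics;
using System.Threading;

class SleepVsEventLatency
{
    static void Main()
    {
        // 1. How long does Thread.Sleep(1) really take?
        var sw = Stopwatch.StartNew();
        Thread.Sleep(1);
        sw.Stop();
        Console.WriteLine("Thread.Sleep(1) took {0:F2} ms", sw.Elapsed.TotalMilliseconds);

        // 2. How long after Set() does a blocked WaitOne() return?
        var evt = new ManualResetEvent(false);
        long signalledAt = 0;

        var waiter = new Thread(() =>
        {
            evt.WaitOne();
            long wokenAt = Stopwatch.GetTimestamp();
            double ms = (wokenAt - Interlocked.Read(ref signalledAt)) * 1000.0 / Stopwatch.Frequency;
            Console.WriteLine("Event wake-up took {0:F3} ms", ms);
        });
        waiter.Start();

        Thread.Sleep(100);   // give the waiter time to block on WaitOne()
        Interlocked.Exchange(ref signalledAt, Stopwatch.GetTimestamp());
        evt.Set();
        waiter.Join();
    }
}
Typically the Sleep(1) figure lands somewhere between 1 ms and the system timer period, while the event wake-up is a small fraction of a millisecond, which matches the behaviour you observed.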