The time is stored in a signed 64-bit integer with nanosecond accuracy. This eliminates the possibility of floating-point inaccuracies.
`monotonic_t` can currently hold values large enough to work correctly for more than 200 years into the future.
Using a typedef instead of `int64_t` directly also makes it easy to change the underlying datatype in the future, should the need arise for more precision or a larger range.
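A minimal sketch of what the typedef and its helpers might look like (the names below are illustrative, not necessarily the ones used in the codebase):

    #include <stdint.h>
    #include <time.h>

    typedef int64_t monotonic_t;  /* nanoseconds since an arbitrary epoch */

    /* INT64_MAX nanoseconds is roughly 292 years, hence the "more than
     * 200 years" figure above. */
    static inline monotonic_t monotonic(void) {
        struct timespec ts = {0};
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (monotonic_t)ts.tv_sec * 1000000000LL + (monotonic_t)ts.tv_nsec;
    }

    static inline double monotonic_t_to_s_double(monotonic_t m) {
        return (double)m / 1e9;  /* convert to floating point only at the edges */
    }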
Also reduce input latency by ignoring repaint_delay when
there is actual pending input.
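Roughly, the idea is that the main loop's wait timeout becomes zero whenever input is already queued. A hypothetical sketch, reusing the monotonic_t type from above:

    #include <stdbool.h>
    #include <stdint.h>

    typedef int64_t monotonic_t;  /* nanoseconds, as above */

    /* Hypothetical helper: choose how long the main loop may sleep. When
     * input is already pending, sleep for zero time so it is handled
     * immediately instead of waiting out the full repaint_delay. */
    static monotonic_t wait_timeout(bool has_pending_input, monotonic_t repaint_delay) {
        return has_pending_input ? 0 : repaint_delay;
    }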
Gets rid of request_tick_callback(). Now empty events
result in the tick callback being called, so there is only a
single mechanism for waking up the main loop and running
the tick callback.
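A simplified sketch of that single mechanism using stock GLFW calls (tick_callback and wakeup_main_loop are illustrative names, not the actual ones):

    #include <GLFW/glfw3.h>

    /* Any code that needs the tick callback to run just posts an empty
     * event; the main loop runs the callback after every wake-up, so no
     * separate request_tick_callback() is needed. */
    static void wakeup_main_loop(void) {
        glfwPostEmptyEvent();  /* wakes glfwWaitEvents() in the main thread */
    }

    static void main_loop(GLFWwindow *window, void (*tick_callback)(void)) {
        while (!glfwWindowShouldClose(window)) {
            glfwWaitEvents();   /* sleeps until some event, possibly empty, arrives */
            tick_callback();    /* runs on every wake-up */
        }
    }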
This matches behavior on macOS. Had initially set the code to process
on every loop tick in an attempt to work around the issue of the event
loop freezing on X11 until an X event is delivered. However, in light
of #1782 that workaround was incorrect anyway. Better to have similar
behavior across platforms. This also has the advantage of reducing CPU
consumption.
Also add a simple program to test event loop wakeups.
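Something along these lines could serve as such a test (a hypothetical sketch, not the actual program): a second thread posts an empty event after a short delay and the main thread measures how long it takes to wake up.

    #include <GLFW/glfw3.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void* poster(void *arg) {
        (void)arg;
        usleep(100000);          /* wait 100 ms, then wake the main loop */
        glfwPostEmptyEvent();
        return NULL;
    }

    int main(void) {
        if (!glfwInit()) return 1;
        GLFWwindow *w = glfwCreateWindow(100, 100, "wakeup-test", NULL, NULL);
        if (!w) { glfwTerminate(); return 1; }
        glfwPollEvents();        /* drain any events generated by window creation */
        pthread_t t;
        pthread_create(&t, NULL, poster, NULL);
        double start = glfwGetTime();
        glfwWaitEvents();        /* should return as soon as the empty event lands */
        printf("woke up after %.1f ms\n", (glfwGetTime() - start) * 1000.0);
        pthread_join(t, NULL);
        glfwDestroyWindow(w);
        glfwTerminate();
        return 0;
    }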
On Linux, just call the tick callback on every loop tick. This is much
simpler, and should fix the issue with screen updates sometimes getting
stuck waiting for an X11 event.
Note that this is what used to happen (global state being checked on
every loop tick) before the refactoring to use a GLFW event loop,
so there should be no performance regressions, though on Linux we
of course end up checking global state on every group of events
instead of only when something of interest happens. I suspect
achieving the latter would require implementing a mutex/lock in the
main loop to avoid races, which doesn't seem worth it.
This should make tracking down the root cause of the
event loop pauses on X11 easier. And the infrastructure
should come in handy in the future as well.
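For instance, the kind of instrumentation meant here could be as simple as timing the gap between loop iterations and reporting unusually long pauses (a sketch with hypothetical names):

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    typedef int64_t monotonic_t;  /* nanoseconds, as above */

    static monotonic_t now_ns(void) {
        struct timespec ts = {0};
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (monotonic_t)ts.tv_sec * 1000000000LL + (monotonic_t)ts.tv_nsec;
    }

    /* Call once per loop iteration; prints a warning when the loop was
     * stalled for longer than threshold_ns. */
    static void report_loop_pause(monotonic_t *last_tick, monotonic_t threshold_ns) {
        monotonic_t t = now_ns();
        if (*last_tick && t - *last_tick > threshold_ns)
            fprintf(stderr, "event loop paused for %.1f ms\n",
                    (double)(t - *last_tick) / 1e6);
        *last_tick = t;
    }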