author    David Robillard <>  2016-10-01 15:18:09 -0400
committer David Robillard <>  2016-10-02 12:24:57 -0400
commit    a172e76897157e5a0d2ebd3fa3f7f77ec38a5df0 (patch)
tree      afc1962fc5123ff8ad4558912e69227bca2a4192 /src/server/events/Disconnect.hpp
parent    5c4356827e51b3d6e1256a050e6273a87728d588 (diff)
Defer graph compilation in atomic bundles
This avoids situations like compiling a graph hundreds of times when it is loaded because it has hundreds of nodes and each event triggers a re-compile. This speeds things up dramatically, but exacerbates the theoretical problem of there not being enough time in a cycle to execute a bundle. As far as I can tell, the execute phase of events is very fast, so hundreds or thousands can easily run in a tiny fraction of the process cycle, but this still needs resolution to be truly hard real-time.

What probably needs to happen is that all context and state used to process is moved to CompiledGraph, and nodes do not access their own fields at all, but instead have references into the CompiledGraph. This way, a compiled graph is separate from its "source code", and an old one could continue to run while a new one is being applied across several cycles.
Diffstat (limited to 'src/server/events/Disconnect.hpp')
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/server/events/Disconnect.hpp b/src/server/events/Disconnect.hpp
index 69d9469c..19ffcf3b 100644
--- a/src/server/events/Disconnect.hpp
+++ b/src/server/events/Disconnect.hpp
@@ -54,7 +54,7 @@ public:
- bool pre_process();
+ bool pre_process(PreProcessContext& ctx);
void execute(RunContext& context);
void post_process();
void undo(Interface& target);