commit cac0d3e620473877830fed11f37de0ab6c6dcc2a
parent 6ed0f3340c58eecdf344bf97843277ebda1d9eca
Author: Yuri Tipikin <yuriitipikin@gmail.com>
Date: Tue, 21 Feb 2017 23:07:43 +0300
+ images w/o testing
Diffstat:
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/src/body.tex b/src/body.tex
@@ -117,7 +117,10 @@ existing or newly appeared daemons accordingly to each mentioned scenarios.
Consider the first scenario. In accordance with the principal-to-subordinate
hierarchy, there are two variants of this failure: when the principal is gone and
-when any subordinate is gone. A subordinate itself is not a valuable part of the
+when any subordinate is gone.
+\includegraphics[scale=0.33]{img/sc1}
+\includegraphics[scale=0.33]{img/sc12}
+A subordinate itself is not a valuable part of the
execution; it is a simple worker. Our scheduler does not store any subordinate
states, only the principal state. Thus, to restore the execution, the scheduler finds
the last valid principal state and simply recreates the failed subordinate on the most
@@ -134,7 +137,10 @@ kernels mostly a dual kernel. For a higher level kernels it seems like a
subordinate, while for the lower kernels it is a principal. Thus, we need to add to
our restoration scope only the state of the principal's principal. As a result, we add
to the variants from the first scenario one where the principal's principal also
-is gone. Since the scheduler, through its daemons, knows the state of all kernels before it begins
+is gone.
+\includegraphics[scale=0.33]{img/sc2}
+\includegraphics[scale=0.33]{img/sc3}
+Since the scheduler, through its daemons, knows the state of all kernels before it begins
the restoration process, it first checks the state of the principal's principal. If
it is gone, all subordinates will be started again according to the hierarchy,
regardless of their states.
@@ -144,8 +150,8 @@ in memory and will not stop execution of whole task if some part of it was
placed on a failed node. Occasionally, however, all nodes of the cluster may fail at the same
time. That case is described in the third scenario. The main difference of this case
is the use of a log. The log is stored on trusted storage and contains kernel states at the
-beginning of the execution and each 'updated' state. By the term 'updated' state we
-mean the principal state after subordinates' \Method{React} calls. The files of the
+beginning of the execution and each <<updated>> state. By the term <<updated>> state we
+mean the principal state after subordinates' \Method{react} calls. The files of the
execution log are individual for each daemon, but have replicas on a selected
number of nodes to provide hardware redundancy. At startup the scheduler has empty
memory, so we develop a procedure of state restoration from the log as follows: