commit 35f035d99ac8658975560d74f3c8f4db5befe609
parent b1024f2eb8e527053b5eee3c5940203bd716a8e8
Author: Ivan Gankevich <igankevich@ya.ru>
Date: Mon, 8 May 2017 11:55:26 +0300
Spell-check.
Diffstat:
2 files changed, 10 insertions(+), 10 deletions(-)
diff --git a/src/intro.tex b/src/intro.tex
@@ -44,11 +44,11 @@ proposed approach and conclude the paper in Section 6.
Research on new fault-tolerance methods is motivated by the fact that the
size of large-scale computing systems (clusters and supercomputers) approaches
a critical point, where the number of nodes is so large that the probability of all
-nodes simultaneously working without a faulure is tends to nought. In other
-words, in future large-scale systems it is highly probable that a parallel
-application experience node failure throughout its execution, and tolerating
-this failure in a transparent way and without checkpointts will increase
-performance of future parallel applications.
+nodes simultaneously working without a failure tends to nought. In other words,
+in future large-scale systems it is highly probable that a parallel application
+experiences a node failure during its execution, and tolerating this failure
+in a transparent way and without checkpoints will increase performance of
+future parallel applications.
\section{Related work}
@@ -56,7 +56,7 @@ Dynamic role assignment is an emerging trend in design of distributed
systems~\citep{ostrovsky2015couchbase,divya2013elasticsearch,boyer2012glusterfs,anderson2010couchdb,lakshman2010cassandra},
however, it is still not used in big data job schedulers. For example, in
popular YARN~\citep{vavilapalli2013yarn} and Spark~\citep{zaharia2012resilient}
-job schedulers, which is used by Hadoop and Spark big data analysis frameworks,
+job schedulers, which are used by Hadoop and Spark big data analysis frameworks,
master and slave roles are static. Failure of a slave node is tolerated by
restarting a part of a job on a healthy node, and failure of a master node is
tolerated by setting up a standby reserved server~\citep{murthy2011architecture}.
diff --git a/src/sections.tex b/src/sections.tex
@@ -105,8 +105,8 @@ maximise efficiency of a programme:
communication of adjacent nodes.
\end{itemize}
-Thus, control flow objects (or kernels) possess properties of both cooperative
-routines and event handlers.
+Thus, kernels possess properties of both cooperative routines and event
+handlers.
\subsection{Fail over model}
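The statement in the hunk above, that kernels combine the properties of cooperative
routines and event handlers, can be illustrated with a minimal C++ sketch. The
interface below is an assumption made for illustration only (the names kernel, act
and react are hypothetical, not taken from the diff): act() is the routine run when
the kernel is scheduled, and react() is the handler invoked when a subordinate
kernel returns its result.

    // Hypothetical kernel interface (names assumed, not from the paper):
    // a cooperative routine (act) plus an event handler (react) for
    // completed subordinate kernels.
    struct kernel {
        virtual ~kernel() = default;
        // Cooperative-routine side: executed when the kernel is scheduled;
        // it may create and send subordinate kernels, then yield control.
        virtual void act() = 0;
        // Event-handler side: executed when a subordinate kernel comes
        // back with its result.
        virtual void react(kernel* subordinate) = 0;
    };

A concrete kernel would override act() to spawn parts of the computation and
react() to collect and combine their results.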
@@ -255,7 +255,7 @@ checkpoints. The advantage is that they
\item and they use memory of a subordinate node instead of stable storage.
\end{itemize}
-\section{Results}
+\section{Evaluation}
\label{sec:results}
The master node fail over technique is evaluated on the example of wave energy
@@ -301,7 +301,7 @@ cores of a node. Processing begins with joining corresponding measurements for
each spectrum variable into a tuple, then for each tuple the frequency-directional
spectrum is reconstructed and its variance is computed. Results are gradually
copied back to the machine where the application was executed, and when the
-processing is complete the programme terminates. A data processing pipeling
+processing is complete the programme terminates. A data processing pipeline
corresponding to the algorithm is presented in fig.~\ref{fig:pipeline}.
\begin{figure}
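As a rough illustration of the per-tuple stage described in the hunk above (join
the measurements of the spectrum variables into a tuple, reconstruct the
frequency-directional spectrum for each tuple, compute its variance), here is a
minimal C++ sketch. All types and the reconstruction stub are placeholders assumed
for the example; this is not the application's actual code.

    #include <numeric>
    #include <vector>

    // Placeholder types; the application's real data structures are not shown in the diff.
    struct Tuple    { std::vector<double> vars; };  // joined measurements of spectrum variables
    struct Spectrum { std::vector<double> bins; };  // frequency-directional spectrum

    // Stub standing in for the spectrum reconstruction step described in the text.
    Spectrum reconstruct(const Tuple& t) { return Spectrum{t.vars}; }

    // Variance of the reconstructed spectrum values.
    double variance(const Spectrum& s) {
        if (s.bins.empty()) return 0.0;
        const double n = static_cast<double>(s.bins.size());
        const double mean = std::accumulate(s.bins.begin(), s.bins.end(), 0.0) / n;
        double sum_sq = 0.0;
        for (double x : s.bins) sum_sq += (x - mean) * (x - mean);
        return sum_sq / n;
    }

    // Per-tuple stage of the pipeline: reconstruct each spectrum, then
    // reduce it to a single variance value; results would then be copied
    // back to the submitting machine, as the text describes.
    std::vector<double> process(const std::vector<Tuple>& tuples) {
        std::vector<double> out;
        out.reserve(tuples.size());
        for (const Tuple& t : tuples) out.push_back(variance(reconstruct(t)));
        return out;
    }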