iccsa-16-factory-extended

git clone https://git.igankevich.com/iccsa-16-factory-extended.git

commit 37a2ec994299227ee5dbd3d38fa1efe41fd52e16
parent 6006ab436f029ff19d3a7ad2d06398abec88ff92
Author: Ivan Gankevich <igankevich@ya.ru>
Date:   Mon, 13 Feb 2017 10:56:10 +0300

Correct the text with proselint.

Diffstat:
src/sections.tex | 18+++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/src/sections.tex b/src/sections.tex
@@ -14,8 +14,8 @@ Given a programme that is sequential and large enough to
 be decomposed into several sequential steps, the simplest way to make it run
 faster is to exploit data parallelism. Usually it means finding
 multi-dimensional arrays and loops that access their elements and trying to
 make them parallel. After transforming
-several loops the programme will still have the same number of sequential steps,
-but every step will (ideally) be internally parallel.
+several loops the programme will still have the same number of sequential
+steps, but every step will (ideally) be internally parallel.
 After that the only possibility to speedup the programme is to overlap
 execution of code blocks that work with different hardware devices. The most common
@@ -102,9 +102,9 @@ maximise efficiency of a programme:
 of the corresponding routine. As a result, upon a failure of a routine all of
 its subroutines must be restarted. This encourages a programmer to construct
 \begin{itemize}
-  \item deep tree hierarchies of tightly-coupled kernels (which communicate
+  \item deep tree hierarchies of tightly coupled kernels (which communicate
   on the same level of hierarchy) to reduce overhead of recomputation;
-  \item fat tree hierarchies of loosely-coupled kernels, providing maximal
+  \item fat tree hierarchies of loosely coupled kernels, providing maximal
   degree of parallelism.
 \end{itemize}
 Deep hierarchy is not only the requirement of technology; it helps optimise
@@ -174,7 +174,7 @@ the I/O performance is more predictable, than one of the hybrid approach with
 streaming files over the network.
 
 The main purpose of the model is to simplify the development of distributed batch
-processing applications and middleware. The main focus is to make an application
+processing applications and middleware. The focus is to make an application
 resilient to failures, i.e. make it fault tolerant and highly available, and
 do it transparently to a programmer. The implementation is divided into two
 layers: the lower layer consists of routines and classes for single node
@@ -216,7 +216,7 @@ before nested procedure call, and \texttt{react} method is a sequence of
 processor instructions after the call. Constructing and sending subordinate
 kernels to the pipeline is nested procedure call. Two methods are necessary to
 make calls asynchronous, and replace active wait for completion of subordinate
-kernels with passive one. Pipelines, in turn, allow implementing passive wait,
+kernels with passive one. Pipelines, in turn, allow implementing passive wait,
 and call correct kernel methods by analysing their internal state.
 
 \subsection{Handling master node failures}
@@ -391,14 +391,14 @@ restore it is to restart the application.
 Computational kernels are means of abstraction that decouple a distributed
 application from physical hardware: it does not matter how many nodes are online
 for an application to run successfully. Computational kernels eliminate the need
-to allocate a physical backup node to make the master node highly-available, with
+to allocate a physical backup node to make the master node highly available, with
 computational kernels approach any node can act as a backup one. Finally,
 computational kernels can handle subordinate node failures in a way that is
 transparent to a programmer.
 
 The disadvantage of this approach is evident: there is no way of making existing
-middleware highly-available without rewriting their source code. Although, our
+middleware highly available without rewriting their source code. Although, our
 programming framework is lightweight, it is not easy to map architecture of
 existing middleware systems to it: most systems are developed keeping in mind
 static assignment of server/client roles, which is not easy to make dynamic.
-Hopefully, our approach will simplify design of future middleware systems.
+Nevertheless, our approach may simplify design of future middleware systems.