commit 464f90387c370a0249f34598594fc33fcee8ccdb
parent e053b723cdc2ef42f1ed4cdbd86001c674c7bde4
Author: Ivan Gankevich <igankevich@ya.ru>
Date: Sat, 30 Mar 2019 13:33:51 +0300
Spell-check.
Diffstat:
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/main.tex b/main.tex
@@ -377,7 +377,7 @@ choose single precision in all benchmarks.
\end{table}
Double precision was used only for computing autoregressive model coefficients,
-because roundoff and truncation numerical errors make covariance matrices (from
+because round-off and truncation numerical errors make covariance matrices (from
which coefficients are computed) non-positive definite. These matrices
typically have very large condition numbers, and the linear systems which they
represent cannot be solved by Gaussian elimination or \(LDL^T\) Cholesky
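
To make the precision issue concrete, the following is a minimal sketch, not
Virtual testbed code: the Hilbert matrix merely stands in for an
ill-conditioned covariance matrix, and the function names and the size n=10
are illustrative. In single precision the \(LDL^T\) factorisation typically
hits a non-positive pivot, while in double precision it completes.

// Sketch: LDL^T factorisation that reports the first non-positive
// pivot, i.e. the point where rounding errors have destroyed positive
// definiteness. Not Virtual testbed code; the Hilbert matrix below is
// a stand-in for an ill-conditioned covariance matrix.
#include <cstdio>
#include <vector>

template <class T>
bool ldlt_positive_definite(int n, const std::vector<T>& a) {
    std::vector<T> l(n*n, T(0));
    std::vector<T> d(n, T(0));
    for (int j = 0; j < n; ++j) {
        T sum = a[j*n+j];
        for (int k = 0; k < j; ++k) sum -= l[j*n+k]*l[j*n+k]*d[k];
        d[j] = sum;
        if (!(d[j] > T(0))) return false; // pivot lost positivity
        for (int i = j+1; i < n; ++i) {
            T s = a[i*n+j];
            for (int k = 0; k < j; ++k) s -= l[i*n+k]*l[j*n+k]*d[k];
            l[i*n+j] = s / d[j];
        }
    }
    return true;
}

template <class T>
std::vector<T> hilbert(int n) { // SPD, condition number ~1e13 for n=10
    std::vector<T> h(n*n);
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            h[i*n+j] = T(1) / T(i+j+1);
    return h;
}

int main() {
    const int n = 10;
    std::printf("float:  %d\n", ldlt_positive_definite(n, hilbert<float>(n)));
    std::printf("double: %d\n", ldlt_positive_definite(n, hilbert<double>(n)));
    // Typically prints 0 for float and 1 for double.
}

This matches the choice described above: double precision only where the
condition number demands it, single precision everywhere else.
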
@@ -587,7 +587,7 @@ pressure is computed. These routines are much more compute intensive in
comparison to wetted surface determination and pressure force computation
(performance of which depends on the number of panels).
-Despite the fact that Capybara has the highest floating-point performanace
+Despite the fact that Capybara has the highest floating-point performance
across all workstations in the benchmarks, Virtual testbed runs faster on its
processor than on its graphical accelerator. Routine-by-routine investigation
showed that it is simply slower at computing even fully parallel Stokes wave
@@ -617,17 +617,17 @@ both a processor and a graphical accelerator) is similar to approach followed by
the authors of the Spark distributed data processing
framework~\cite{zaharia2016spark}. In this framework, data is first loaded into
the main memory of each cluster node and then processed in a loop. Each
-iteration of this loop runs by all nodes in parallel and syncrhonisation occurrs
+iteration of this loop is executed by all nodes in parallel and synchronisation occurs
at the end of each iteration. This is in contrast to the MapReduce
framework~\cite{dean2008mapreduce}, where after each iteration the data is
written to stable storage and then read back into the main memory to continue
-prcoessing. Not interacting with slow stable storage on every iteration allows
+processing. Not interacting with slow stable storage on every iteration allows
Spark to achieve an order of magnitude higher performance than Hadoop
(an open-source implementation of MapReduce) on iterative algorithms.
On a heterogeneous node, an analogue of stable storage, reads and writes to which are
-much slower than accesses to the main memory, is graphical acelerator memory. To
-minimise interaction with this memory, we do not read intermeidate results of
+much slower than accesses to the main memory, is graphical accelerator memory. To
+minimise interaction with this memory, we do not read intermediate results of
our computations from it, but reuse arrays that already reside there. (As a
concrete example, we do not copy the pressure field from the graphical accelerator,
only the forces for each panel.) This allows us to eliminate expensive data
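
As a concrete illustration of this pattern, here is a hedged host-side
sketch. It assumes an OpenCL-style API; the kernel and buffer names
(pressure_kernel, force_kernel) are hypothetical and not Virtual testbed's
actual interface. The pressure field lives only in accelerator memory; the
per-panel forces are the only data read back each time step.

// Sketch of the device-resident-intermediate pattern described above.
// Kernel and buffer names are hypothetical; error handling is omitted.
#include <CL/cl.h>
#include <vector>

void simulate(cl_context ctx, cl_command_queue q,
              cl_kernel pressure_kernel, cl_kernel force_kernel,
              size_t ngrid, size_t npanels, int nsteps) {
    // Created once, before the simulation loop; the pressure buffer is
    // an intermediate result that is never copied back to the host.
    cl_mem pressure = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                                     ngrid*sizeof(float), nullptr, nullptr);
    cl_mem force = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY,
                                  npanels*sizeof(float), nullptr, nullptr);
    clSetKernelArg(pressure_kernel, 0, sizeof(cl_mem), &pressure);
    clSetKernelArg(force_kernel, 0, sizeof(cl_mem), &pressure); // reused in place
    clSetKernelArg(force_kernel, 1, sizeof(cl_mem), &force);
    std::vector<float> forces(npanels);
    for (int step = 0; step < nsteps; ++step) {
        clEnqueueNDRangeKernel(q, pressure_kernel, 1, nullptr, &ngrid,
                               nullptr, 0, nullptr, nullptr);
        clEnqueueNDRangeKernel(q, force_kernel, 1, nullptr, &npanels,
                               nullptr, 0, nullptr, nullptr);
        // The only device-to-host transfer per step: the panel forces,
        // not the whole pressure field.
        clEnqueueReadBuffer(q, force, CL_TRUE, 0, npanels*sizeof(float),
                            forces.data(), 0, nullptr, nullptr);
        // ... integrate ship equations of motion on the CPU ...
    }
    clReleaseMemObject(force);
    clReleaseMemObject(pressure);
}

Creating the buffers once and reusing them across iterations is the direct
analogue of Spark keeping its working set in main memory between iterations.
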
@@ -659,7 +659,7 @@ Apart from that, the main direction of future research is creation of on-board
intelligent system that would include Virtual testbed as an integral part for
simulating and predicting physical phenomena.
-\subsubsection*{Acknowledgments.}
+\subsubsection*{Acknowledgements.}
This research is supported by Saint Petersburg State University (grant
no.~26520170).
diff --git a/references.bib b/references.bib
@@ -94,7 +94,7 @@
}
@Misc{ hull2010,
- title = {Analytical ship hull shape construction, wave resistance
+ title = {Analytic ship hull shape construction, wave resistance
calculations, theoretical blueprint feature curve
calculations, and ship stability diagrams (in {Russian})},
author = {Vasily Khramushin},