
Novel Approaches for Distributing Workload on Commodity Computer Systems
\section{Long-lived transactions}
\label{sec:trans}

Various errors may occur while computing in an HPC environment, and hardware errors have the greatest impact among them. The approaches that exist today to handle this type of error often amount to restarting the job completely. In distributed systems computational nodes are at even greater risk of being lost due to additional factors such as an unreliable network. Thus, a complete job restart every time one or more nodes fail is inefficient.

In search of a solution to this problem, let us turn to the traditional transaction mechanism. Designed for operations on data, it uses logging and locking to prevent data corruption and loss. The ACID properties --- Atomicity, Consistency, Isolation, Durability --- of a transaction apply to relational databases; for distributed systems they are implemented with several restrictions. Now consider a high-performance distributed environment. Here operations are performed on tasks, and the objective is to obtain valid computation results. At first glance, to apply transactions to computing one only needs to segment the initial task code and treat each subtask as an atomic operation, but this is not sufficient.

Unlike database operations, computations can take much longer to execute, and long-lived transactions, which in theory may take as much time as needed, are the solution. The key aspect of this technique is correct logging and subsequent journal processing. Since there is no unified definition of a ``long-lived transaction'', we define the term here.

A \textsl{long-lived transaction} is a transaction whose operations are executed over long time periods, with long gaps between the completion of successive operations. It is therefore not safe to assume fast execution of such transactions, and only the atomicity and durability properties are guaranteed for them.

At the modeling stage, a web service is a natural container for a computational subtask. Using REST~\cite{armstrong2003making} --- representational state transfer --- as a specific implementation of web services, we design and implement job scheduling on nodes as a call to a web service with a target URL. The main emphasis here is on the process of restoring a failed operation, and the aim of this paper is to offer a time-efficient algorithm for such restoration. Next, we compare the properties of the REST realisation with the ``reliable'' ACID property set and draw conclusions about the changes in our model that guarantee completion of the initial task.

Testing REST web services inside a transaction container revealed that ACID cannot be applied ``as is'': these properties conflict both with REST basics and with the definition of a web service. Let us consider these properties one by one.

\subsection{ACID in REST}

\subsubsection{Atomicity.} Atomicity guarantees that a transaction is never partially saved, and in terms of data this works well. However, web services are operations that create, modify and delete data while running, so a transaction involving one or more web services must be moved to a higher level of abstraction. In fact, a transactional system of REST web services has two levels of atomicity: the database level and the level where web services are called directly. The first is provided by the particular database implementation by default; the second is a logical level that the programmer must implement in terms of a particular algorithm. It is important to understand that a web service implementation must guarantee logical atomicity of its internal operations by allowing only two terminal states: an entirely correct result of the whole web service or an error state. Thus, by applying this requirement recursively, a collection of web services can be considered on the logical level as a single web service.
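The two-state termination contract described above can be sketched as follows. This is a minimal illustration, not the paper's actual code; the function names `atomicCall` and `atomicAll` are hypothetical.

```javascript
// Sketch of logical atomicity for a web service call: the caller only
// ever observes one of two terminal states, 'done' with a complete
// result, or 'error'. No partial result is exposed.
async function atomicCall(service, input) {
  try {
    const result = await service(input);
    return { state: 'done', result };
  } catch (err) {
    return { state: 'error', reason: String(err) };
  }
}

// Applying the requirement recursively: a collection of services is
// itself atomic, i.e. 'done' only if every member call is 'done'.
async function atomicAll(services, input) {
  const results = [];
  for (const s of services) {
    const r = await atomicCall(s, input);
    if (r.state === 'error') return r; // whole collection fails as one
    results.push(r.result);
  }
  return { state: 'done', result: results };
}
```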

Hence a web service needs a rollback operation; the authors of \cite{kochman2012batched,wilde2011rest} came to the same conclusion. However, the REST architecture imposes no mandatory requirements on how system functionality is implemented, which in general makes automatic rollback of an operation in such systems impossible. Even if rollback operations were implemented by the web service developer, there would be no guarantee of a valid result after calling these methods, because the logical side of an algorithm can be non-trivial.

\subsubsection{Durability.} Durability is an interesting property: it prevents loss of state information (even after hardware failures) by recording all actions in a special journal. Implementing an efficient distributed journal poses some challenges, and so far only a few related articles partially describe logging of REST web services~\cite{kochman2012batched,wilde2011rest}. In our approach a distributed logging system plays an important role in achieving efficient restoration of failed tasks, and implementing this property meets REST requirements.
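A journal of this kind can be sketched as an append-only log that is replayed to recover the last known state of a task. This is an in-memory illustration only; a real durable system would append to replicated persistent storage, and the `Journal` name is hypothetical.

```javascript
// Minimal sketch of a durability journal: every action is appended,
// never rewritten, so the state of any task can be recovered by replay.
class Journal {
  constructor() { this.entries = []; }
  log(taskId, event) {
    const entry = { taskId, event, time: Date.now() };
    this.entries.push(entry); // append-only
    return entry;
  }
  // Recover the last recorded event of a task by scanning backwards.
  lastEvent(taskId) {
    for (let i = this.entries.length - 1; i >= 0; i--)
      if (this.entries[i].taskId === taskId) return this.entries[i].event;
    return null;
  }
}
```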

\subsubsection{Isolation.} In REST this property is difficult to achieve without introducing additional proxy server objects. Such objects serve as queues that filter ``transactional'' web services and execute them sequentially; the method is described in more detail in~\cite{kochman2012batched}. But if a web service is designed to use parallel operations, a proxy server can become a performance bottleneck. In that case isolation must be implemented at the level of the web service itself, not at the abstract level of a collection of web services. Transaction isolation through web service isolation imposes additional requirements on the web service developer, but without it the efficiency of the transactional system may suffer. Isolation of a collection of transactions is provided by the transactional job scheduler by default, because it executes transactions sequentially from a queue.
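The proxy-queue idea can be illustrated with a few lines of Node.js: transactional calls are funneled through a single queue and executed strictly one after another, which gives isolation at the cost of serialization. The `SerialQueue` name is illustrative, not from the paper's implementation.

```javascript
// Sketch of a proxy queue that isolates "transactional" calls by
// executing them sequentially: each call is chained onto the previous
// one, so no two calls ever overlap.
class SerialQueue {
  constructor() { this.tail = Promise.resolve(); }
  enqueue(fn) {
    const next = this.tail.then(fn);
    this.tail = next.catch(() => {}); // keep the chain alive after errors
    return next;
  }
}
```

This is exactly the potential bottleneck mentioned above: a slow call delays every call queued after it, even ones that could have run in parallel.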

\subsubsection{Consistency.} Much like isolation, consistency is difficult to guarantee at the web service level: it is a property of a database rather than of a web service.

%Web services are distributed systems and there is well-known CAP theorem, so, no consistency property for web services, because in our case they must be highly available and be tolerant to failures (and use rollback).

\subsection{Implementation}

The main feature of our transaction manager is a special structure of the initial task code for long-lived transactions. The implementation consists of a server, which executes transactions sequentially by placing them in a queue, logging them and collecting responses, and rewritten client code, which uses special functions (we call them {\it act} and {\it react}) to divide a task into a set of subtasks. The server receives the structured code of a subtask, transferred, for example, in JSON format. Each subtask is an autonomous part of the code. After rewriting, the initial task is represented by a tree, which itself is a transaction and whose leaves are operations. The ordering of operations is thus preserved: sequential operations have a parent-child relation, and parallel ones a child-child relation.
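A possible shape of such a rewritten task is sketched below. The field names (`id`, `act`, `react`, `children`) and the helper functions are hypothetical; the paper does not fix a concrete schema, only the tree structure and the two ordering relations.

```javascript
// Hypothetical JSON-like shape of a rewritten task. A parent depends on
// its children (parent-child = sequential), while siblings are
// independent (child-child = parallel). 'act' names the subtask body,
// 'react' the function that combines child results.
const task = {
  id: 'root',
  react: 'sumParts',                                  // combine child results
  children: [
    { id: 'part1', act: 'integrate(0, 0.5)', children: [] },
    { id: 'part2', act: 'integrate(0.5, 1)', children: [] },
  ],
};

// The two ordering relations encoded by the tree:
function isSequential(parent, child) {                // parent-child
  return parent.children.includes(child);
}
function isParallel(parent, a, b) {                   // child-child
  return parent.children.includes(a) && parent.children.includes(b) && a !== b;
}
```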

This approach differs substantially from those described in~\cite{kochman2012batched,wilde2011rest}: it is not focused on web service calls alone. In real-world applications the user wants a reliable overall result, without information about intermediate web service calls, yet those intermediate operations must be processed in any case. Only simplistic algorithms consist exclusively of web service calls; often a call is the result of subtask processing by another web service. Dividing a task into a group of subtasks and only then forming a transaction allows processing more than just web service invocations: in fact, any part of the initial task can be made secure.

In REST web services there is no universal way to achieve general atomicity. Rollback is implemented by moving backward through the tree from the leaf in the failed state. Rollback is performed for each subtask separately, without affecting other subtasks on that particular computational node. In case of a node hardware failure, rollback first waits some time for the node to come back online, so as not to move the task to another node's queue. If that scenario proves impossible, the rollback function goes one level up and tries the same approach. Code segmentation can improve restoration time significantly, but it also prevents legacy applications from running in such an environment.
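The leaf-first escalation strategy can be sketched as a short recursive function. The names `restore`, `retry` and `rollback` are illustrative; the retry predicate stands in for waiting a bounded time for the failed node to come back online.

```javascript
// Sketch of leaf-first rollback escalation. Starting from the failed
// leaf: first try to wait for the node to come back (retry), then try a
// local rollback; only if both fail, move one level up the tree and
// repeat, avoiding a massive rollback of the whole transaction.
async function restore(node, retry, rollback) {
  if (await retry(node)) return node;    // node came back online in time
  if (await rollback(node)) return node; // local rollback succeeded
  if (!node.parent) throw new Error('rollback reached the root');
  return restore(node.parent, retry, rollback); // escalate one level up
}
```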

The ineffectiveness of running rollback immediately at higher levels (for an entire subtree) is shown in~\cite{armstrong2003making}. As previously mentioned, the rollback function is empty by default and must be written by the programmer of the web service. This step is necessary and logical, because a correct rollback for each function must be written by a person who knows exactly in which state the abstract transaction will be after a failed operation and what that operation was doing before it stopped. The transactional system described in this paper is a tool, not a universal solution for all types of operations, because each computational algorithm is unique. Using this tool requires rethinking the original algorithm in terms of its partitioning into autonomous segments.

\subsection{Building the transaction and early results}
The transaction manager follows a step-by-step algorithm.
\begin{enumerate}
    \item The programmer selects self-sufficient parts of the original algorithm, marking them as transaction operations.
    \item The programmer writes a rollback function for each of these parts.
    \item The structured code is sent to the transaction server.
    \item The server executes the transaction by placing the root node (which comprises the whole algorithm) in the queue and then moving down the code tree.
    \item If processing completes without failures, the iteration reaches the leaves and moves back up, transferring successful results from children to their parents.
    \item If processing fails, the transaction runs the rollback function on the failed nodes and tries to prevent a massive rollback by slowly moving up the tree one level at a time.
\end{enumerate}
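Steps 4--6 can be condensed into one sketch. This is a simplified illustration under stated assumptions: the node shape follows the hypothetical `act`/`react` structure from the previous subsection, the journal is a plain array, and escalation up the tree is reduced to rethrowing to the parent call.

```javascript
// Sketch of steps 4-6: descend the tree, run leaves ('act'), pass
// results up to parents ('react'), journal every step, and on failure
// run the node's local rollback before escalating to the level above.
async function executeTransaction(node, journal) {
  journal.push({ id: node.id, event: 'start' });
  try {
    const childResults = await Promise.all(
      (node.children || []).map(c => executeTransaction(c, journal)));
    const result = (node.children && node.children.length)
      ? node.react(childResults) // step 5: children's results flow upward
      : node.act();              // leaves execute their own code
    journal.push({ id: node.id, event: 'done' });
    return result;
  } catch (err) {
    journal.push({ id: node.id, event: 'rollback' });
    if (node.rollback) await node.rollback(); // step 6: local rollback first
    throw err;                                // then escalate one level up
  }
}
```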

The system was tested on a 3-level algorithm tree that produces 25 journal lines until completion. The measured time indicates how long the transaction manager works to complete the task if a rollback function is called at a specific log line (Fig.~\ref{fig:trans}); computation time without any rollback calls is shown as a dotted line. The time peaks in Fig.~\ref{fig:trans} correspond to journal lines that designate the ``execution'' step of one or more subtasks. The total overhead varies from 5\% to 15\% owing to the quick prototype implementation in Node.js. The task itself consists of a simple integration. Web services as tasks were tested as well, to investigate the properties of long-lived transactions described above.
\begin{figure}
    \centering
    \includegraphics[width=0.77\textwidth]{trans}
    \caption{Rollback time of a subtask at different log lines.}
    \label{fig:trans}
\end{figure}