Utilizing real service simulation techniques to investigate distributed storage systems' latency performance based on an examination of Amazon S3

Section: Articles | Published: 2024-02-09 | Pages: 27-34

Volume 07, Issue 02

Abstract

Current erasure codes rely heavily on data nodes to generate parity nodes. Increasing the number of parity nodes raises the tolerance for error and improves the chances of restoring the original data. As the number of parity nodes grows, however, the storage overhead rises, and the repair burden on data nodes rises as well, because data nodes are queried frequently to help repair failed parity nodes. In LRC [25, 26], for instance, if a global parity node fails, all data nodes must be read to rebuild it. This increased demand on the data nodes makes read requests to them take longer to process, which is unwelcome in applications that depend on frequent, low-latency data retrievals, such as a Google search.
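To make the repair dependency concrete, here is a minimal XOR-based sketch (our own illustration, not LRC's actual coding math; the block values and k = 4 are hypothetical): because a global parity is a function of every data block, rebuilding it after a failure requires reading all k data blocks.

```python
# Minimal illustration: a global parity computed over all data blocks
# can only be rebuilt by re-reading every one of those blocks.

data_blocks = [0b1010, 0b0110, 0b1100, 0b0011]  # hypothetical k = 4 data blocks

def make_global_parity(blocks):
    """XOR of all data blocks -- every block participates."""
    p = 0
    for b in blocks:
        p ^= b
    return p

parity = make_global_parity(data_blocks)

# Simulate losing the parity node: the only way back is to
# re-read all k data blocks and recompute the parity.
rebuilt = make_global_parity(data_blocks)
assert rebuilt == parity
print(len(data_blocks))  # number of data-node reads for one parity repair
```

The point of the sketch is the access count: one parity repair touches all k data nodes, so repair traffic competes directly with foreground read requests on those nodes.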

In an effort to cut down on waiting time, a code can generate parity nodes from both data nodes and other parity nodes, so that parity nodes take over part of the repair work normally done by data nodes. In other words, the number of data nodes that must be accessed remains constant regardless of whether a parity node has failed. At first glance, such parity nodes would appear to incur additional storage cost; however, if the architecture is sound, generating parity nodes from parity nodes can decrease access latency without raising the storage requirements, as we shall show in the following sections [27, 28].
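The latency argument above can be sketched as a simple access count (our own illustration, not the paper's simulator; the node counts k = 12 and group size 3 are hypothetical): a flat layout whose parities all derive from data nodes must contact every data node to rebuild a global parity, while a hierarchical layout that derives some parities from a small group of other parities contacts only that group, leaving the data nodes free to serve reads.

```python
# Minimal counting sketch: data-node reads needed to rebuild
# one failed upper-level parity node under two layouts.

def flat_repair_reads(num_data_nodes):
    # Flat scheme: every parity is computed from all data nodes,
    # so repairing a global parity reads every data node.
    return num_data_nodes

def hierarchical_repair_reads(parity_group_size):
    # Hierarchical scheme (the HTSC-style idea): an upper-level parity
    # is computed from a small group of lower-level parities, so its
    # repair reads only that group -- zero data-node reads.
    return parity_group_size

k = 12      # hypothetical number of data nodes
group = 3   # hypothetical size of a parity-of-parities group

print(flat_repair_reads(k))          # reads that hit the data nodes
print(hierarchical_repair_reads(group))  # reads that hit only parity nodes
```

Under these assumptions the repair load shifts from 12 data-node reads to 3 parity-node reads, which is the mechanism by which read latency on data nodes stays flat during repairs.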

In this research, we compare the effectiveness of the Hierarchical Tree Structure Code (HTSC) and the High Failure-tolerant Hierarchical Tree Structure Code (FH HTSC).

Keywords

Hierarchical Tree Structure, Data Retrieval, Data Nodes