— Presentation at IEEE INFOCOM 2014, CrossCloud —
June 4, 2014
Fig.1 Concept of wide area distributed cloud
Yokohama Research Laboratory presented distributed cloud computing, in which the response time of applications is improved by processing and storing data at data centers near the terminals (Fig. 1), at the CrossCloud workshop of IEEE INFOCOM 2014, held in Canada from 26 April to 2 May.
Business applications (e.g., CRM and ERP) in private cloud computing systems are accessed from all over the world owing to modern-day globalization. This makes it difficult for the cloud to deliver a good response time: if terminals access the applications from a great distance, the response time can reach hundreds or even thousands of milliseconds, exceeding the maximum response time that end users can tolerate without degraded usability. Content delivery networks reduce the response time of static (i.e., read-only) Web content, but they do not work for dynamic (i.e., read/write) content such as business applications.
To lower the response time, applications and data should be replicated at datacenters (DCs) in the same geographical regions as the terminals by federating multiple clouds. Terminals are globally distributed, while the DC locations offered by a single cloud service provider are typically limited to a few regions. To overcome this limitation, federation, or interconnection, of multiple clouds is a promising technology that enables the utilization of multiple clouds, including public clouds. Common standards and policies to provide a universal environment across clouds have recently been formulated.
An open and non-trivial issue in today's operation of private clouds is improving the response time when utilizing multiple DCs, including those of public clouds. For one thing, system analysis and planning for placing applications and related data is often manual work. Moreover, measuring the response time in a test environment and feeding the results back is required, because theoretical work is not precise enough to guarantee the stringent service level agreements (SLAs) required by business applications. Such work is one of the principal factors that make the management of private cloud infrastructures so expensive.
Conventional placement studies based on analysis promise to alleviate the analysis and planning workload. However, the manual work of measurement and feedback in test environments remains even if these studies are successfully adopted in operation.
The conventional studies are based on mathematical formulations such as integer linear programming. For example, the placement of applications together with their data components in the cloud has been studied, with cloud platform requirements specified in terms of processing capacity, memory, and storage. Similar requirements have been formulated as mixed integer programs to find an appropriate mapping within DCs. Various other approaches have attempted to satisfy multi-fold placement constraints on CPU, memory, network bandwidth, and energy. Placement problems that minimize end-to-end response time (i.e., user-perceived latency) have also been formulated.
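To make the flavor of such placement formulations concrete, the following is a minimal sketch, not any specific paper's model: each user region's replica is assigned to one DC so as to minimize total delay subject to per-DC capacity, solved here by brute-force enumeration instead of an ILP solver. All region names, DC names, and numbers are illustrative assumptions.

```python
from itertools import product

# Hypothetical example data: round-trip delays (ms) from each user
# region to each candidate DC, and per-DC capacity in replica slots.
DELAY_MS = {                      # DELAY_MS[region][dc]
    "asia":   {"dc_tokyo": 20,  "dc_virginia": 180, "dc_frankfurt": 250},
    "europe": {"dc_tokyo": 260, "dc_virginia": 90,  "dc_frankfurt": 25},
    "us":     {"dc_tokyo": 150, "dc_virginia": 15,  "dc_frankfurt": 100},
}
CAPACITY = {"dc_tokyo": 1, "dc_virginia": 2, "dc_frankfurt": 1}

def best_placement(regions, dcs):
    """Assign each region's replica to one DC, minimizing total delay
    subject to DC capacity, by exhaustive enumeration."""
    best, best_cost = None, float("inf")
    for assignment in product(dcs, repeat=len(regions)):
        load = {dc: assignment.count(dc) for dc in dcs}
        if any(load[dc] > CAPACITY[dc] for dc in dcs):
            continue  # violates a capacity constraint
        cost = sum(DELAY_MS[r][dc] for r, dc in zip(regions, assignment))
        if cost < best_cost:
            best, best_cost = dict(zip(regions, assignment)), cost
    return best, best_cost

placement, cost = best_placement(list(DELAY_MS), list(CAPACITY))
print(placement, cost)  # each region mapped to its nearest feasible DC
```

A real formulation would hand the same objective and constraints to a mixed-integer solver; the point here is only the shape of the problem, and why its quality depends entirely on the accuracy of the delay and capacity inputs.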
Fig.2 Overview of Wide Area Tentative Scaling
These approaches are based on mathematical formulations and do not work well if the input parameters of the model are inaccurate, and it is no trivial matter to precisely determine these parameters (e.g., performance and communication delay) in a federated cloud. For example, performance in most public clouds is not guaranteed and varies depending on other applications, and communication delay between DCs varies depending on application behavior. Moreover, extensive analysis is required to understand all of the behaviors and reference relationships between the components of multi-tier applications, as well as the data consistency between components, in a large-scale system. As a result, even after conventional research is applied to the operation of private clouds, manual measurement and feedback in the test environment is still required to ensure sufficient performance.
In this presentation, we proposed wide area tentative scaling (WATS), which repetitively changes the placement organization of a subset of applications and their related data so as to reduce response time in a phased manner (Fig. 2). Conventional approaches focus on improving analytical precision by considering various parameters. In contrast, WATS copes with analytical errors by tentatively changing the organization and measuring the result, thereby directly tackling the difficulty of precise estimation. The drawback of this approach is that it consumes more computing resources due to repeated replication. We therefore applied Bayesian inference to search for a better organization with fewer trials.
Our objective is to reduce the response time by placing applications and data at geographically distributed DCs without incurring additional operation load such as measurements in a test environment. This paper offers three primary contributions: (1) an approach to improving response time in a phased manner by repetitively and tentatively replicating applications and their data at DCs, (2) a Bayesian inference algorithm that searches for a better organization with fewer trials (i.e., tentative organization changes), and (3) evaluations of the response time and the computing cost.
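The measure-and-update loop behind the Bayesian search could be sketched as follows. This is an illustrative sketch only, not the presented algorithm: it assumes a normal posterior over each candidate organization's mean response time and uses Thompson sampling to decide which organization to try next, so that promising candidates are measured more often and fewer trials are spent on poor ones. The organization names, true means, and noise figures are invented for the example, and `measure` simulates a tentative deployment.

```python
import random

random.seed(0)

# Hypothetical candidate organizations with unknown true mean response
# times (ms); measurements are noisy, as in a federated cloud where
# public-cloud performance varies.
TRUE_MEAN = {"org_a": 120.0, "org_b": 80.0, "org_c": 95.0}

def measure(org):
    """Simulate tentatively deploying `org` and measuring response time."""
    return random.gauss(TRUE_MEAN[org], 10.0)

# Normal posterior over each organization's mean response time, with an
# assumed known observation noise variance.
posterior = {o: {"mean": 100.0, "var": 400.0} for o in TRUE_MEAN}
NOISE_VAR = 100.0

def update(p, obs):
    """Conjugate normal-normal update of the posterior mean/variance."""
    k = p["var"] / (p["var"] + NOISE_VAR)
    p["mean"] += k * (obs - p["mean"])
    p["var"] *= (1.0 - k)

# Thompson sampling: each trial picks the organization whose sampled
# mean response time is lowest, then updates its posterior.
for _ in range(30):
    org = min(posterior, key=lambda o: random.gauss(
        posterior[o]["mean"], posterior[o]["var"] ** 0.5))
    update(posterior[org], measure(org))

# Organization with the lowest posterior mean response time.
best = min(posterior, key=lambda o: posterior[o]["mean"])
print(best)
```

The design choice this illustrates is the trade-off WATS targets: each trial costs real replication resources, so the search policy must concentrate measurements on likely-good organizations rather than sweeping all candidates uniformly.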
(By YABUSAKI Hitoshi)