3.2 Cloud Architectures

In recent years, the increasing prevalence of large-scale architectures has ushered in the era of Cloud Computing.

Cloud Computing

There is a huge number of different definitions of Cloud Computing. The term "cloud" itself is derived from the figurative abstraction of the internet, which is commonly depicted as a cloud. The available definitions range from dismissals of Cloud Computing as "the next hype term", over pragmatic definitions focusing on particular aspects, to definitions that regard Cloud Computing as a general paradigm shift of information architectures, as an ACM CTO Roundtable [Cre09] has shown in 2009. Of the various features attributed to the "cloud", scalability, a pay-per-use utility model and virtualization are the minimum common denominators.

The key concept of Cloud Computing is the provisioning of services. These services can generally be differentiated into three vertical layers:

IaaS
The provision of virtualized hardware, in most cases as virtual computing instances, is termed IaaS. The particular feature of this form of cloud service is the ability to change these instances on demand and at any time. This includes spawning new computing instances, altering existing ones by resizing or reassigning resources, or shutting down unneeded instances dynamically. The basic working units of IaaS are virtual images that are instantiated, deployed in the cloud and executed within virtual machines. They contain an operating system as well as additional software implementing the application. Due to complete virtualization, the physical infrastructure of the data centers of IaaS providers is entirely transparent to their customers.
PaaS
PaaS provides an additional level of abstraction by offering an entire runtime environment for cloud-based applications. PaaS typically supplies a software stack of dedicated components on which applications can be executed, as well as tools facilitating the development process. The platform hides all scalability efforts from the developer; it thus appears as a single system to the user, although it transparently adapts to scale. Most platforms realize this feature through elaborate monitoring and metering. Once an application is uploaded onto the platform, the system automatically deploys it within the nodes of the PaaS provider's data center. When the application runs under heavy load, new nodes are automatically added for execution. If demand declines, spare nodes are detached from the application and returned to the pool of available resources.
SaaS
SaaS denotes the web-centric provision of applications. Thanks to various technological trends, web-based applications have matured into powerful pieces of software running entirely within the user's browser. By replacing traditional desktop applications that run locally, SaaS providers are able to publish their applications solely on the web. Cloud architectures allow them to cope with a huge number of concurrent users. Users can access the applications on demand and are charged merely by usage, without having to buy any products or licenses. This reflects the idea of considering the provision of software as a service.

In the terminology of Cloud Computing, a scalable web application is thus essentially a SaaS offering that requires an appropriate execution and hosting environment (PaaS and/or IaaS).

PaaS and IaaS Providers

In the following, we will consider some exemplary hosting providers and outline their service features.

Amazon Web Services

Amazon was one of the first providers of dedicated, on-demand and pay-per-use web services, and it currently dominates the multi-tenant Cloud Computing market. Its main product is Elastic Compute Cloud (EC2), a service providing virtualized private servers in various configurations. The provision of virtualized machines in scalable amounts forms an architectural basis for most of its clients. Instead of growing and maintaining their own infrastructure, EC2 clients can spin up new machine instances within seconds and thus cope with varying demand. This traditional IaaS hosting is complemented by a set of other scalable services that represent typical architecture components.
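To illustrate the on-demand character of EC2, the following sketch shows how a client might launch a new instance using the AWS SDK for Java. The credentials, the machine image id and the instance type used here are placeholders, and error handling is omitted.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.ec2.AmazonEC2Client;
import com.amazonaws.services.ec2.model.RunInstancesRequest;
import com.amazonaws.services.ec2.model.RunInstancesResult;

public class Ec2Example {
    public static void main(String[] args) {
        // Placeholder credentials; real access keys are issued per AWS account.
        AmazonEC2Client ec2 = new AmazonEC2Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Launch a single small instance from a (placeholder) machine image.
        RunInstancesRequest request = new RunInstancesRequest()
                .withImageId("ami-00000000")
                .withInstanceType("t1.micro")
                .withMinCount(1)
                .withMaxCount(1);

        RunInstancesResult result = ec2.runInstances(request);
        String instanceId = result.getReservation().getInstances().get(0).getInstanceId();
        System.out.println("Launched instance " + instanceId);
    }
}

Terminating the instance later is a similarly small API call, which is what makes adapting the number of machines to the current demand feasible in the first place.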

Elastic Block Store (EBS) provides block-level storage volumes for EC2 instances. Simple Storage Service (S3) is a web-based key-value store for files. More sophisticated data storage systems are available in the form of SimpleDB, DynamoDB and the Relational Database Service (RDS). The former two are non-relational database management systems with a limited feature set. RDS currently supports MySQL-based and Oracle-based relational databases.
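As a brief illustration of the key-value character of S3, the following sketch stores and retrieves a file using the AWS SDK for Java. The bucket name, the object key, the local file and the credentials are hypothetical.

import java.io.File;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.S3Object;

public class S3Example {
    public static void main(String[] args) {
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Bucket name and key are hypothetical.
        String bucket = "example-bucket";
        String key = "uploads/report.pdf";

        // Store a local file under the given key ...
        s3.putObject(bucket, key, new File("report.pdf"));

        // ... and retrieve it again by key.
        S3Object object = s3.getObject(bucket, key);
        System.out.println("Content type: " + object.getObjectMetadata().getContentType());
    }
}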

Elastic Load Balancing (ELB) provides load-balancing functionality at the transport protocol level (e.g. TCP) and at the application level (e.g. HTTP). The Simple Queue Service (SQS) can be used as a message queue. For complex MapReduce-based computations, Amazon has introduced Elastic MapReduce.
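The following sketch outlines how SQS might be used for decoupled message passing with the AWS SDK for Java. The queue name, the message body and the credentials are hypothetical, and error handling as well as long polling are omitted.

import java.util.List;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.Message;

public class SqsExample {
    public static void main(String[] args) {
        AmazonSQSClient sqs = new AmazonSQSClient(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // Create (or look up) a queue; the queue name is hypothetical.
        String queueUrl = sqs.createQueue("render-jobs").getQueueUrl();

        // Producer side: enqueue a message.
        sqs.sendMessage(queueUrl, "job-42");

        // Consumer side: poll for messages and delete them after processing.
        List<Message> messages = sqs.receiveMessage(queueUrl).getMessages();
        for (Message message : messages) {
            System.out.println("Processing " + message.getBody());
            sqs.deleteMessage(queueUrl, message.getReceiptHandle());
        }
    }
}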

ElastiCache is an in-memory caching system that helps speed up web applications. CloudFront is a CDN that complements S3 by replicating items geographically. For monitoring purposes, Amazon provides CloudWatch, a central real-time monitoring web service.
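Besides collecting built-in metrics of other services, CloudWatch also accepts application-defined measurements. The following sketch publishes a single custom data point using the AWS SDK for Java; the namespace, the metric name, the value and the credentials are hypothetical.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
import com.amazonaws.services.cloudwatch.model.MetricDatum;
import com.amazonaws.services.cloudwatch.model.PutMetricDataRequest;
import com.amazonaws.services.cloudwatch.model.StandardUnit;

public class CloudWatchExample {
    public static void main(String[] args) {
        AmazonCloudWatchClient cloudWatch = new AmazonCloudWatchClient(
                new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

        // A hypothetical application-level metric.
        MetricDatum datum = new MetricDatum()
                .withMetricName("ActiveSessions")
                .withUnit(StandardUnit.Count)
                .withValue(42.0);

        cloudWatch.putMetricData(new PutMetricDataRequest()
                .withNamespace("ExampleApp")
                .withMetricData(datum));
    }
}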

Besides these services, Amazon has introduced its own PaaS stack, called Elastic Beanstalk. It is essentially a bundle of existing services such as EC2, S3 and ELB and allows developers to deploy and scale Java-based web applications. Furthermore, there are additional offerings that cover business concerns such as accounting or billing.

Google App Engine

The Google App Engine is a PaaS environment for web applications. It currently provides support for Python, Go and several JVM-based languages such as Java. Applications are hosted and executed in data centers managed by Google. One of its main features is the automatic scaling of the application. The deployment of the application is transparent to the user, and applications encountering high load are automatically deployed to additional machines.

Google provides free usage quotas for the App Engine, limiting, among other things, traffic, bandwidth, the number of internal service calls and storage size. Once these limits are exceeded, users can decide to add billable resources and are then charged for the additional capacities.

The runtime environment of the application is sandboxed and several language features are restricted. For instance, JVM-based applications cannot spawn new threads, open socket connections or access the local file system. Furthermore, the execution of a request must not exceed a 30-second limit. These restrictions are enforced by modified JVMs and altered class libraries.

Besides an application container, the App Engine provides several services and components as part of the platform. They are accessible through a set of APIs. We will now have a brief look at the current Java API.

The Datastore API is the basic storage backend of the App Engine. It is a schemaless object datastore with support for querying and atomic transaction execution. Developers can access the datastore via Java Data Objects, the Java Persistence API interfaces or a special low-level API. A dedicated API provides support for asynchronous access to the datastore. For larger objects, and especially binary resources, the App Engine provides the Blobstore API. For caching purposes, the Memcache API can be used, either through a low-level API or through the JCache interface.
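The following sketch illustrates the low-level Datastore API in combination with the low-level Memcache API. The entity kind "Greeting", its property name and the read-through caching strategy are chosen for illustration only.

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

public class GreetingStore {
    private final DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    // Persist a schemaless entity of kind "Greeting" and return its key.
    public Key saveGreeting(String content) {
        Entity greeting = new Entity("Greeting");
        greeting.setProperty("content", content);
        return datastore.put(greeting);
    }

    // Read-through cache: look in Memcache first, fall back to the datastore.
    public String loadGreeting(Key key) throws EntityNotFoundException {
        String cached = (String) memcache.get(key);
        if (cached != null) {
            return cached;
        }
        String content = (String) datastore.get(key).getProperty("content");
        memcache.put(key, content);
        return content;
    }
}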

The Capabilities API enables programmatic access to scheduled service downtimes and can be used to develop applications that automatically prepare for the unavailability of certain capabilities. The Multitenancy API provides support for multiple separated instances of an application running in the App Engine.
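The following sketch indicates how both APIs might be used: checking the write capability of the datastore before attempting a write, and scoping subsequent API calls to a tenant-specific namespace. The tenant identifier is a hypothetical per-customer value.

import com.google.appengine.api.NamespaceManager;
import com.google.appengine.api.capabilities.CapabilitiesService;
import com.google.appengine.api.capabilities.CapabilitiesServiceFactory;
import com.google.appengine.api.capabilities.Capability;
import com.google.appengine.api.capabilities.CapabilityStatus;

public class TenantAwareRequest {

    // Allow the application to degrade gracefully during a scheduled
    // maintenance window of the datastore.
    public boolean datastoreWritable() {
        CapabilitiesService capabilities = CapabilitiesServiceFactory.getCapabilitiesService();
        CapabilityStatus status = capabilities.getStatus(Capability.DATASTORE_WRITE).getStatus();
        return status == CapabilityStatus.ENABLED;
    }

    // Scope all subsequent datastore and memcache calls to one tenant.
    public void switchTenant(String tenantId) {
        NamespaceManager.set(tenantId);  // tenantId is a hypothetical identifier
    }
}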

Images can be manipulated using the dedicated Images API. The Mail API allows the application to send and receive emails. Similarly, the XMPP API allows message exchange based on XMPP. The Channel API can be used to establish high-level channels with clients over HTTP and then push messages to them.
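The following sketch indicates how a server-side component might open a channel and later push a message to a connected client via the Channel API. The client identifier and the payload are hypothetical, and the corresponding JavaScript client code running in the browser is omitted.

import com.google.appengine.api.channel.ChannelMessage;
import com.google.appengine.api.channel.ChannelService;
import com.google.appengine.api.channel.ChannelServiceFactory;

public class PushExample {
    private final ChannelService channels = ChannelServiceFactory.getChannelService();

    // Create a channel for a connected client; the returned token is handed
    // to the browser, which opens the channel via the JavaScript client library.
    public String openChannel(String clientId) {
        return channels.createChannel(clientId);
    }

    // Push a message to that client at any later point in time.
    public void notifyClient(String clientId, String payload) {
        channels.sendMessage(new ChannelMessage(clientId, payload));
    }
}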

The Remote API opens an App Engine application for programmatic access from an external Java application. Other web resources can be accessed using the URLFetch API.
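A minimal sketch of fetching an external resource through the URLFetch API might look as follows; the response encoding is assumed to be UTF-8 for illustration.

import java.io.IOException;
import java.net.URL;

import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

public class FetchExample {
    // Fetch an external web resource from within the sandbox.
    public String fetchPage(String address) throws IOException {
        URLFetchService fetcher = URLFetchServiceFactory.getURLFetchService();
        HTTPResponse response = fetcher.fetch(new URL(address));
        return new String(response.getContent(), "UTF-8");
    }
}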

The Task Queue API provides support for tasks that are decoupled from request handling: requests can add tasks to a queue, and workers execute the background work asynchronously. The Users API provides several means of authenticating users, including Google Accounts and OpenID identifiers.
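The following sketch combines both APIs: a request handler enqueues a task that is later delivered as an HTTP POST to a worker servlet, and the currently signed-in user is resolved. The worker URL, the parameter name and the idea of resizing an image are hypothetical.

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;
import com.google.appengine.api.users.User;
import com.google.appengine.api.users.UserService;
import com.google.appengine.api.users.UserServiceFactory;

public class BackgroundWork {

    // Enqueue a task; a worker servlet mapped to /worker (hypothetical)
    // later receives it as an HTTP POST and performs the actual work.
    public void scheduleResize(String imageKey) {
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder.withUrl("/worker").param("imageKey", imageKey));
    }

    // Resolve the currently authenticated user, if any.
    public String currentUserEmail() {
        UserService users = UserServiceFactory.getUserService();
        User user = users.getCurrentUser();
        return (user != null) ? user.getEmail() : null;
    }
}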