CLOUD APPLICATION PROGRAMMING AND THE ANEKA PLATFORM
❖ What is meant by Aneka platform?
- Aneka is a cloud computing PaaS software platform for
developing applications. Aneka comes with a set of
extensible APIs for programming models like
MapReduce. These APIs support different cloud deployment models such as private, public, and hybrid clouds.
- What is Aneka in Cloud Computing?
- Aneka is a cloud application platform. It provides a set
of tools and services for developing, deploying, and
managing cloud applications. It was developed by the
Distributed Systems and Networking (DSN) Laboratory
at the University of Melbourne, Australia.
- What are the advantages of Aneka in Cloud
Computing?
- Aneka provides several advantages for developers and
users in cloud computing. It provides a scalable
infrastructure, enables cost savings, provides a flexible
platform, helps to reduce complexity, and provides a
high-availability infrastructure.
- What are the features of Aneka?
- Aneka provides various features for developing,
deploying, and managing cloud applications. Aneka
provides multi-tenancy and allows multiple users to
share the same infrastructure, supports virtualization,
and can be integrated with a wide range of cloud
providers and services.
- What are the types of Aneka?
- Aneka is a cloud application platform that comes in two different versions: Aneka Enterprise and Aneka Cloud. Aneka Enterprise is designed for enterprise-level applications, while Aneka Cloud is designed for small to medium-sized applications.
❖ ANEKA CLOUD APPLICATION PLATFORM - FRAMEWORK OVERVIEW
• Aneka is a software platform for developing cloud
computing applications.
• It allows harnessing disparate computing resources and aggregating them into a unique virtual domain, the Aneka Cloud, in which applications are
executed. According to the Cloud Computing
Reference Model presented in Chapter 1, Aneka is a
pure PaaS solution for cloud computing.
• Aneka is a cloud middleware product that can be
deployed on a heterogeneous set of resources: a
network of computers, a multicore server, data
centres, virtual cloud infrastructures, or a mixture of
these.
• The framework provides both middleware for
managing and scaling distributed applications and an
extensible set of APIs for developing them.
• Figure 5.2 provides a complete overview of the
components of the Aneka framework.
• The core infrastructure of the system provides a
uniform layer that allows the framework to be
deployed over different platforms and operating
systems.
• The physical and virtual resources representing the
bare metal of the cloud are managed by the Aneka
container, which is installed on each node and
constitutes the basic building block of the
middleware.
• A collection of interconnected containers constitutes
the Aneka Cloud: a single domain in which services
are made available to users and developers.
• The container features three different classes of services:
• Fabric Services
• Foundation Services
• Execution Services
• These take care of infrastructure management, supporting services for the cloud, and application management and execution, respectively.
• These services are made available to developers and administrators by means of the application management and development layer, which includes the interfaces and APIs for developing cloud applications and the management tools and interfaces for controlling Aneka Clouds.
• Aneka implements a Service-Oriented Architecture (SOA), and services are the fundamental components of an Aneka Cloud.
• Services operate at the container level and, except for the platform abstraction layer, they provide developers, users, and administrators with all the features offered by the framework.
• Services also constitute the extension and
customization point of Aneka Clouds: the
infrastructure allows for the integration of new
services or replacement of the existing ones with a
different implementation.
• The framework includes the basic services for
infrastructure and node management, application
execution, accounting, and system monitoring;
existing services can be extended and new features
can be added to the cloud by dynamically plugging
new ones into the container.
• Such an extensible and flexible infrastructure enables Aneka Clouds to support different programming and execution models for applications.
• A programming model represents a collection of
abstractions that developers can use to express
distributed applications; the runtime support for a
programming model is constituted by a collection of
execution and foundation services interacting
together to carry out application execution.
• Within a cloud environment, there are different aspects involved in providing a scalable and elastic infrastructure and a distributed runtime for applications.
• These services involve the following:
a) Elasticity and Scaling
b) Runtime Management
c) Resource management
d) Application Management
e) User Management
f) QoS/SLA Management and Billing
a) Elasticity and Scaling:
• With its dynamic provisioning service, Aneka supports dynamic up-sizing and down-sizing of the infrastructure available for applications (a minimal sketch of such a scaling decision follows this list of services).
b) Runtime Management:
• The runtime machinery is responsible for
keeping the infrastructure up and running,
and serves as a hosting environment for
services.
• It is primarily represented by the container
and a collection of services managing
service membership and lookup,
infrastructure maintenance and profiling.
c) Resource Management:
• Aneka is an elastic infrastructure where
resources are added and removed
dynamically, according to the application
needs and user requirements.
• In order to provide QoS-based execution, the system not only allows dynamic provisioning but also provides capabilities for reserving nodes for exclusive use by specific applications.
f) QoS/SLA Management and Billing:
• In a cloud environment, application execution is metered and billed.
• Aneka provides a collection of services that coordinate to account for the usage of resources by each application and to bill the owning user accordingly.
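The sketch below illustrates, in plain Python rather than Aneka's .NET APIs, the kind of up-sizing/down-sizing decision a dynamic provisioning service might make based on the pending-job queue. The PoolState fields, the jobs_per_node ratio, and the bounds are hypothetical and only serve to make the idea concrete.

```python
# Minimal, hypothetical sketch of an elasticity decision: grow or shrink the
# pool of worker nodes based on the length of the pending-job queue.
# The thresholds and fields are illustrative, not Aneka's provisioning API.

from dataclasses import dataclass


@dataclass
class PoolState:
    pending_jobs: int      # jobs waiting to be scheduled
    active_nodes: int      # nodes currently part of the Aneka Cloud
    min_nodes: int = 1
    max_nodes: int = 20


def scaling_decision(state: PoolState, jobs_per_node: int = 5) -> int:
    """Return how many nodes to add (positive) or release (negative)."""
    wanted = -(-state.pending_jobs // jobs_per_node)        # ceiling division
    desired = max(state.min_nodes, min(state.max_nodes, wanted))
    return desired - state.active_nodes


if __name__ == "__main__":
    state = PoolState(pending_jobs=42, active_nodes=3)
    delta = scaling_decision(state)
    if delta > 0:
        print(f"provision {delta} additional node(s)")      # up-sizing
    elif delta < 0:
        print(f"release {-delta} node(s)")                  # down-sizing
    else:
        print("pool size is adequate")
```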
❖ Anatomy of the Aneka Container
• The Aneka container is the building block of the Aneka Cloud and represents the runtime machinery available to services and applications.
• The container is the unit of deployment in Aneka Clouds, and it is a lightweight software layer designed to host services and interact with the underlying operating system and hardware.
• The main role of the container is to provide a lightweight environment in which to deploy services, along with some basic capabilities such as communication channels for interacting with other nodes in the Aneka Cloud.
• The services installed in the Aneka container can be classified into three major categories:
• Fabric Services
• Foundation Services
• Application Services
❖ Platform Abstraction Layer
- The Platform Abstraction Layer (PAL) addresses the heterogeneity of the hardware and operating systems on which the container may be deployed, and provides the container with a uniform interface for accessing the relevant hardware and operating system information, thus allowing the rest of the container to run unmodified on any supported platform.
- The PAL is responsible for detecting the supported hosting environment and providing the corresponding implementation to interact with it in order to support the activity of the container.
- It provides the following features:
- A uniform and platform-independent implementation interface for accessing the hosting platform.
- Uniform access to extended and additional properties of the hosting platform.
- Uniform and platform-independent access to remote nodes.
- Uniform and platform-independent management interfaces.
- The PAL is a small layer of software comprising a detection engine, which automatically configures the container at boot time with the platform-specific component needed to access the above information, and an implementation of the abstraction layer for the Windows, Linux, and Mac OS X operating systems.
- The collectible data exposed by the PAL are the following:
- Number of cores, frequency, and CPU usage
- Memory size and usage
- Aggregate available disk space.
- Network addresses and devices attached to the node
- The PAL interface provides means for custom
implementations to pull additional information by
using name-value pairs that can host any kind of
information about the hosting platform.
- As an example, these properties can contain additional information about the processor, such as the model and family, or additional data about the process running the container.
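As an illustration only (Aneka's PAL is part of the .NET container and is not exposed this way), the following Python sketch collects the same categories of data listed above and exposes them as name-value pairs. It assumes the third-party psutil library, and property names such as cpu.cores are invented for the example.

```python
# Illustrative sketch of PAL-style data collection exposed as name-value pairs.
# Uses the third-party psutil library; this is not Aneka's PAL.

import platform
import psutil


def collect_platform_properties() -> dict:
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage("/")
    props = {
        "os.name": platform.system(),
        "cpu.cores": psutil.cpu_count(logical=True),
        "cpu.frequency_mhz": getattr(psutil.cpu_freq(), "current", None),
        "cpu.usage_percent": psutil.cpu_percent(interval=0.5),
        "memory.total_bytes": mem.total,
        "memory.used_bytes": mem.used,
        "disk.free_bytes": disk.free,
        "network.interfaces": list(psutil.net_if_addrs().keys()),
    }
    # Extended, platform-specific properties can be added as extra pairs.
    props["cpu.model"] = platform.processor()   # may be empty on some systems
    return props


if __name__ == "__main__":
    for name, value in collect_platform_properties().items():
        print(f"{name} = {value}")
```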
• Fabric Services: Fabric Services define the lowest level of the software stack representing the Aneka Container.
• Fabric Services provide access to the resource-provisioning subsystem and to the monitoring facilities implemented in Aneka.
• There are two types of Fabric Services installed in the container:
• Profiling and Monitoring
• Resource Management
• Profiling and Monitoring: -
• Profiling and monitoring are mostly carried out through the Heartbeat, Monitoring, and Reporting Services.
• The first makes available the information that is
collected through the PAL, while the other two
implement a generic infrastructure for monitoring the
activity of any service in the Aneka Cloud.
• Any service wanting to publish monitoring data can leverage the local monitoring service without knowing the details of the entire infrastructure (a minimal sketch of this pattern follows the list of built-in services below).
• Currently several built-in services provide information
through this channel:
• The membership catalogue tracks the performance
information of nodes.
• The execution service monitors several time intervals
for the execution of jobs
• The scheduling service tracks the state transitions of
jobs
• The storage service monitors and makes available
the information about data transfer, such as upload
and download times, file names and sizes
• The resource provisioning service tracks the
provisioning and lifetime information of virtual nodes.
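The following is a hypothetical Python sketch of the publish pattern described above: any service pushes name-value monitoring samples to a local monitoring service, and a reporting step later drains what was collected. Class and method names (LocalMonitoringService, publish, drain) are invented for illustration and are not Aneka's API.

```python
# Hypothetical sketch of the publish/collect pattern: services push name-value
# samples to a local monitoring service without knowing how the rest of the
# infrastructure stores or forwards them.

import time
from collections import defaultdict


class LocalMonitoringService:
    def __init__(self):
        self._samples = defaultdict(list)

    def publish(self, source: str, metric: str, value):
        """Called by any service that wants to expose monitoring data."""
        self._samples[(source, metric)].append((time.time(), value))

    def drain(self):
        """Hand the collected samples to a reporting component and reset."""
        collected, self._samples = dict(self._samples), defaultdict(list)
        return collected


if __name__ == "__main__":
    monitor = LocalMonitoringService()
    # e.g. an execution service reporting a job's run time,
    # and a storage service reporting an upload duration.
    monitor.publish("ExecutionService", "job.runtime_seconds", 12.7)
    monitor.publish("StorageService", "file.upload_seconds", 3.2)
    for (source, metric), samples in monitor.drain().items():
        print(source, metric, samples)
```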
• Resource Management:
- Resource management is another fundamental feature of Aneka Clouds.
- It comprises several tasks: resource membership, resource reservation, and resource provisioning.
- Aneka provides a collection of services in charge of these tasks: the Index Service (or Membership Catalogue), the Reservation Service, and the Resource Provisioning Service.
- The membership catalogue is the fundamental
component for resource management since it keeps
track of the basic node information for all nodes that
are connected or disconnected.
- Resource provisioning is a feature designed to support QoS-driven execution of applications; therefore, it mostly serves requests coming from the Reservation Service or the scheduling services.
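A minimal sketch of what a membership catalogue conceptually keeps track of, written in Python with invented names (NodeInfo, MembershipCatalogue); Aneka's actual Membership Catalogue is a .NET service with a richer interface.

```python
# Illustrative sketch of a membership catalogue: a registry of basic node
# information that records whether each node is currently connected.

import time
from dataclasses import dataclass, field


@dataclass
class NodeInfo:
    address: str
    services: list = field(default_factory=list)    # services hosted on the node
    connected: bool = True
    last_heartbeat: float = field(default_factory=time.time)


class MembershipCatalogue:
    def __init__(self):
        self._nodes = {}

    def join(self, node_id: str, info: NodeInfo):
        self._nodes[node_id] = info

    def heartbeat(self, node_id: str):
        node = self._nodes[node_id]
        node.connected = True
        node.last_heartbeat = time.time()

    def mark_disconnected(self, node_id: str):
        self._nodes[node_id].connected = False

    def query(self, connected_only: bool = True):
        return {nid: n for nid, n in self._nodes.items()
                if n.connected or not connected_only}
```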
Foundation Services
- Fabric services are fundamental services of the
Aneka cloud and define the basic infrastructure
management features of the system.
- Foundation Services are related to the logical management of the distributed system built on top of the infrastructure, and provide supporting services for the execution of distributed applications.
- All the supported programming models can integrate with and leverage these services in order to provide advanced and comprehensive application management.
- These services cover:
a) Storage management for applications
b) Accounting, billing and resource pricing
c) Resource Reservation
a) Storage Management
- The management of data is an important aspect in
any distributed system, even in computing clouds.
- Applications operate on data, which are mostly persisted and moved in the form of files.
- Hence any infrastructure supporting the execution of
distributed applications needs to provide facilities for
file/data transfer management and persistent
storage.
- Aneka offers two different facilities for storage management:
- A centralized file storage
- A distributed file system
- The model proposed by the Google File System provides optimized support for a specific class of applications that expose the following characteristics:
- Files are huge by traditional standards.
- Files are modified by appending new data rather than rewriting existing data.
- There are two kinds of major workloads: large streaming reads and small random reads.
- It is more important to have sustained bandwidth than low latency.
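Assuming, for illustration only, that the centralized file storage is reachable over FTP (the host, credentials, and file names below are placeholders), the following Python sketch shows the general idea of staging an input file into the store and retrieving an output file from it. Real Aneka deployments expose storage through their own services.

```python
# Hedged sketch of staging files against a centralized store, here assumed to
# be reachable over FTP. Host, credentials, and file names are hypothetical.

from ftplib import FTP


def upload_input(host, user, password, local_path, remote_name):
    """Push an application input file to the central store."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "rb") as fh:
            ftp.storbinary(f"STOR {remote_name}", fh)


def download_output(host, user, password, remote_name, local_path):
    """Pull a result file produced by the application back to the client."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(local_path, "wb") as fh:
            ftp.retrbinary(f"RETR {remote_name}", fh.write)
```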
b) Accounting, Billing, and Resource Pricing
- Accounting keeps track of the status of applications in the Aneka Cloud.
- The collected information provides a detailed breakdown of the usage of the distributed infrastructure, and it is vital for the proper management of resources.
- The information collected for accounting is primarily related to the usage of the infrastructure and the execution of applications.
- Billing is another important feature of accounting.
- Aneka is a multi-tenant cloud programming platform where the execution of applications can involve provisioning additional resources from commercial IaaS providers.
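A hypothetical sketch of the accounting-and-billing idea: usage records collected per application are aggregated and billed to the owning user. The UsageRecord fields and the rates are invented for the example and do not reflect Aneka's pricing model.

```python
# Hypothetical sketch of accounting and billing: per-application usage records
# are accumulated and billed to the owning user. Rates are illustrative only.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class UsageRecord:
    application_id: str
    user: str
    node_hours: float         # metered execution time on Aneka nodes
    provisioned_hours: float  # time on resources rented from an IaaS provider


def bill(records, node_rate=0.02, provisioned_rate=0.10):
    """Aggregate usage records into a per-user bill (currency units assumed)."""
    totals = defaultdict(float)
    for r in records:
        totals[r.user] += (r.node_hours * node_rate
                           + r.provisioned_hours * provisioned_rate)
    return dict(totals)


if __name__ == "__main__":
    usage = [
        UsageRecord("app-1", "alice", node_hours=10.0, provisioned_hours=2.0),
        UsageRecord("app-2", "alice", node_hours=4.0, provisioned_hours=0.0),
        UsageRecord("app-3", "bob", node_hours=1.5, provisioned_hours=6.0),
    ]
    print(bill(usage))   # roughly {'alice': 0.48, 'bob': 0.63}
```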
c) Resource Reservation
- Resource reservation supports the execution of distributed applications and allows for reserving resources for exclusive use by specific applications.
- Resource reservation is built out of two different kinds of services:
- The Reservation Service
- The Allocation Service
The former keeps track of all the reserved time slots in the Aneka Cloud and provides a unified view of the system, while the latter is installed on each node featuring execution services and manages the information about the slots allocated on the local node. Applications that need to complete within a given deadline can make a reservation request for a specific number of nodes in a given time frame. Different protocols and strategies are integrated in a completely transparent manner, and Aneka provides an extensible API for supporting advanced reservation services. The built-in implementations are the following:
a) Basic Reservation: It features the basic capability of reserving execution slots on nodes and implements the alternate offers protocol, which provides alternative options in case the initial reservation request cannot be satisfied (a minimal sketch of this exchange follows this list).
b) Libra Reservation: It represents a variation of the previous implementation that features the ability to price nodes differently according to their hardware capabilities.
c) Relay Reservation: It constitutes a very thin implementation that allows a resource broker to reserve nodes in Aneka Clouds and control the logic by which these nodes are reserved. This implementation is useful in integration scenarios in which Aneka operates in an inter-cloud environment.
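The sketch below illustrates the spirit of the alternate offers idea mentioned for Basic Reservation: if the requested slot cannot be granted, the reservation side responds with the closest alternative start time instead of a flat rejection. The data model, the probing step, and the free_nodes_at helper are assumptions made for the example, not Aneka's actual protocol implementation.

```python
# Minimal sketch of an alternate-offers style exchange: when the requested
# time slot cannot be satisfied, answer with the closest alternative.

from dataclasses import dataclass


@dataclass
class ReservationRequest:
    nodes: int
    start: float      # requested start time (e.g. epoch seconds)
    duration: float   # seconds


@dataclass
class Offer:
    accepted: bool
    start: float
    nodes: int


def evaluate(request: ReservationRequest, free_nodes_at) -> Offer:
    """free_nodes_at(t) -> number of nodes free at time t (assumed helper)."""
    if free_nodes_at(request.start) >= request.nodes:
        return Offer(accepted=True, start=request.start, nodes=request.nodes)
    # Alternate offer: shift the start time forward until capacity is available.
    t = request.start
    while free_nodes_at(t) < request.nodes:
        t += 60.0   # probe in one-minute steps (arbitrary granularity)
    return Offer(accepted=False, start=t, nodes=request.nodes)


if __name__ == "__main__":
    def availability(t):
        # Toy availability: only 2 nodes free before t=600, 8 afterwards.
        return 2 if t < 600 else 8

    print(evaluate(ReservationRequest(nodes=4, start=0, duration=300), availability))
```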
Application Services
Application Services manage the execution of applications and constitute a layer that differentiates according to the specific programming model used for developing distributed applications on top of Aneka. The type and the number of services composing this layer for each of the programming models may vary according to the specific needs or features of the selected model. It is possible to identify two major types of activities that are common across all the supported models: scheduling and execution.
1. Scheduling: Scheduling Services are in charge of planning the execution of distributed applications on top of Aneka and governing the allocation of jobs composing an application to nodes. Common tasks performed by the scheduling component are the following (a minimal scheduling sketch follows this list):
- Job-to-node mapping
- Rescheduling of failed jobs
- Job status monitoring
- Application status monitoring.
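A minimal Python sketch of the first two scheduling tasks highlighted above, job-to-node mapping and rescheduling of failed jobs, using a simple FIFO policy. The SimpleScheduler class and its behaviour are illustrative only and do not correspond to Aneka's scheduling services.

```python
# Illustrative sketch of job-to-node mapping and rescheduling of failed jobs.

from collections import deque


class SimpleScheduler:
    def __init__(self, nodes):
        self.free_nodes = deque(nodes)   # nodes currently able to accept a job
        self.queue = deque()             # jobs waiting to be mapped
        self.running = {}                # job_id -> node

    def submit(self, job_id):
        self.queue.append(job_id)
        self._dispatch()

    def _dispatch(self):
        # Job-to-node mapping: assign queued jobs to free nodes in FIFO order.
        while self.queue and self.free_nodes:
            job, node = self.queue.popleft(), self.free_nodes.popleft()
            self.running[job] = node
            print(f"job {job} -> node {node}")

    def completed(self, job_id):
        self.free_nodes.append(self.running.pop(job_id))
        self._dispatch()

    def failed(self, job_id):
        # Rescheduling of failed jobs: put the job back at the head of the queue.
        node = self.running.pop(job_id)
        self.free_nodes.append(node)
        self.queue.appendleft(job_id)
        self._dispatch()


if __name__ == "__main__":
    s = SimpleScheduler(["node-A", "node-B"])
    for j in ("j1", "j2", "j3"):
        s.submit(j)
    s.failed("j1")      # j1 is re-queued and immediately remapped to the freed node
    s.completed("j2")   # frees node-B, so the waiting j3 is dispatched
```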
2. Execution:
Execution Services control the execution of single jobs that compose applications. Application Services constitute the runtime support of a programming model in the Aneka Cloud. Currently there are several supported models:
1. Task Model: This model provides support for independent bag-of-tasks applications and many-task computing. In this model, an application is modeled as a collection of tasks that are independent from each other and whose execution can be sequenced in any order.
2. Thread Model: This model extends classical multithreaded programming to a distributed infrastructure and uses the abstraction of a thread to wrap a method that is executed remotely.
3. MapReduce Model: This is an implementation of MapReduce, as proposed by Google, on top of Aneka.
4. Parameter Sweep Model: This model is a specialization of the Task Model for applications that can be described by a template task whose instances are created by generating different combinations of parameters, each identifying a specific point in the domain of interest (a minimal sketch combining the Task and Parameter Sweep ideas follows below).
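To make the Task and Parameter Sweep Models concrete, the sketch below instantiates a template task for every combination of parameter values and runs the resulting independent tasks concurrently with Python's standard library. It mimics the programming-model concepts only; developing for Aneka itself is done through its .NET APIs.

```python
# Illustrative sketch: a template task is instantiated for every combination
# of parameter values, and the resulting independent tasks (a bag of tasks)
# are executed in any order.

import itertools
from concurrent.futures import ProcessPoolExecutor


def template_task(alpha: float, beta: int) -> tuple:
    """Stand-in for the application's real computation at one parameter point."""
    return (alpha, beta, alpha ** beta)


def parameter_sweep(parameter_space: dict):
    names = list(parameter_space)
    combinations = itertools.product(*(parameter_space[n] for n in names))
    # Each combination becomes one independent task in the bag.
    tasks = [dict(zip(names, combo)) for combo in combinations]
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(template_task, **t) for t in tasks]
        return [f.result() for f in futures]


if __name__ == "__main__":
    results = parameter_sweep({"alpha": [1.0, 2.0, 3.0], "beta": [2, 3]})
    for alpha, beta, value in results:
        print(f"alpha={alpha} beta={beta} -> {value}")
```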