The similarities and differences between cloud computing and grid computing

First, let's talk about Amazon Web Services:
Customers can create their own Amazon Machine Images (AMI) containing an operating system, applications, and data, and they control how many instances of each AMI run at any given time. Customers pay for the instance-hours (and bandwidth) they use, adding computing resources at peak times and removing them when they are no longer needed.
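The pay-for-what-you-use billing described above can be sketched in a few lines. This is an illustrative model, not Amazon's actual pricing code; the rates and numbers are hypothetical.

```python
# Hypothetical sketch of EC2-style pay-per-use billing: customers pay
# only for instance-hours and bandwidth, with no up-front investment.
def usage_cost(instance_hours, hourly_rate, gb_transferred, rate_per_gb):
    """Total cost = compute time plus bandwidth, nothing up front."""
    return instance_hours * hourly_rate + gb_transferred * rate_per_gb

# Example: scale up to 10 instances for a 6-hour peak, then remove them.
peak = usage_cost(instance_hours=10 * 6, hourly_rate=0.10,
                  gb_transferred=50, rate_per_gb=0.09)
print(f"Peak-day bill: ${peak:.2f}")  # 60 * 0.10 + 50 * 0.09 = 10.50
```

The point of the model is that cost tracks usage: when the peak passes and instances are removed, `instance_hours` stops growing and so does the bill.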
Amazon provides five different types of servers ranging from single-core x86 servers to eight-core x86_64 servers. Customers do not have to know which servers are in use to deliver service instances. They can place the instances in different geographical locations or availability zones. Amazon also provides elastic IP addresses that can be dynamically allocated to instances.

Cloud computing
With CC, companies can scale up to massive capacities in an instant without having to invest in new infrastructure, train new personnel, or license new software. CC is of particular benefit to small and medium-sized businesses that wish to completely outsource their data-center infrastructure, and to large companies that wish to gain peak-load capacity without incurring the higher cost of building larger data centers internally. In both cases, service consumers use what they need on the Internet and pay only for what they use.

Grid computing
CC evolved from grid computing and provides on-demand resource provisioning. Grid computing may or may not be in the cloud depending on what type of users are using it. If the users are system administrators and integrators, they care how things are maintained in the cloud: they upgrade, install, and virtualize servers and applications. If the users are consumers, they do not care how things are run in the system.

Similarities and differences
They are both scalable. Scalability is accomplished through load balancing of application instances running separately on a variety of operating systems and connected through Web services.
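The load balancing mentioned above can be illustrated with a minimal round-robin sketch; the server names are hypothetical placeholders, and real balancers add health checks and weighting on top of this idea.

```python
from itertools import cycle

# Minimal sketch of round-robin load balancing across application
# instances; each incoming request goes to the next server in rotation.
class LoadBalancer:
    def __init__(self, servers):
        self._pool = cycle(servers)

    def route(self, request):
        """Return (chosen server, request) for the next server in rotation."""
        return next(self._pool), request

lb = LoadBalancer(["app-1", "app-2", "app-3"])
for req in ["GET /a", "GET /b", "GET /c", "GET /d"]:
    print(lb.route(req))
```

Because requests are spread evenly, adding another instance to the pool raises capacity without changing the application itself, which is what makes this style of scaling attractive.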

Both of them involve multitenancy and multitasking, meaning that many customers can perform different tasks, accessing a single application instance or multiple instances. Sharing resources among a large pool of users helps reduce infrastructure costs and provide peak-load capacity.

Amazon provides a Web services interface for the storage and retrieval of data in the cloud. There is no set maximum on the number of objects you can store; an object can be as small as 1 byte and as large as 5 GB, and total storage can run to several terabytes. While storage computing in the grid is well suited for data-intensive workloads, it is not economically suited for storing objects as small as 1 byte. In a data grid, the amount of distributed data must be large for maximum benefit.
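The put/get-by-key interface described above can be sketched as a toy in-memory object store. This is an illustrative stand-in, not Amazon's implementation; only the 1-byte-to-5 GB object bounds come from the text.

```python
# Hedged sketch of an S3-like object store: objects live in named
# buckets and are stored/retrieved by key. The class is illustrative.
MAX_OBJECT_BYTES = 5 * 1024**3  # 5 GB upper bound mentioned above

class ObjectStore:
    def __init__(self):
        self._buckets = {}

    def put(self, bucket, key, data: bytes):
        """Store an object; enforce the 1-byte-to-5 GB size bounds."""
        if not 1 <= len(data) <= MAX_OBJECT_BYTES:
            raise ValueError("object must be between 1 byte and 5 GB")
        self._buckets.setdefault(bucket, {})[key] = data

    def get(self, bucket, key):
        """Retrieve a previously stored object by bucket and key."""
        return self._buckets[bucket][key]

store = ObjectStore()
store.put("photos", "cat.jpg", b"\xff\xd8...")
print(store.get("photos", "cat.jpg"))
```

Note the contrast with a data grid: this interface is happy with a 1-byte object, whereas grid storage only pays off when the distributed data volumes are large.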

Top Cloud Computing Predictions

Cloud computing is still something new for me.

I googled the keyword “cloud computing prediction” and found some useful information:

Appirio’s 2009 predictions about CC:

1) Cloud of clouds

The early seeds of a “cloud of clouds” are emerging. It started in April with Salesforce and Google announcing integration between Google Apps and Salesforce to bridge the gap between Google’s productivity applications and Salesforce. Later in the year, at Dreamforce, Salesforce expanded the idea of a “cloud of clouds” by announcing integrations with Facebook (for social graph information) and with Amazon (for raw computing infrastructure). Salesforce ended the year with a bang by announcing Force.com for Google App Engine. In a period of 12 months, Salesforce laid the seeds of a “cloud of clouds”, bringing together the strengths of multiple, complementary, on-demand platforms to create a “virtual platform” for the industry.

What is unique about the “cloud of clouds” is the ability to connect realms of software that have never been connected in the past, e.g., business applications, collaboration applications, and social applications. This enables increases in both efficiency (through improved productivity) and effectiveness (through insight and new connections between information).

2) Microsoft Azure – a platform for building applications in the cloud

3) Google apps

4) SaaS 1.0 company -> SaaS 2.0 company


The diagram illustrates the key adoption levels and differences between SaaS 1.0 and SaaS 2.0.

Until recently, SaaS was focused on providing a method for cost-effective software application delivery. The next generation, SaaS 2.0, will change this by providing a fully integrated business service provisioning platform. SaaS 2.0 will provide the perfect low-cost platform through which companies can engage with their business partners. A recent study from Saugatuck Technology, entitled “SaaS 2.0 – Software as a Service as a Next-Gen Business Platform,” shows that SaaS is at a fundamental tipping point: from simply being a service that provides applications to one that provides applications fully integrated into a company’s B2B infrastructure.

SaaS 2.0-based environments will encompass the following key attributes:

  • Secure, flexible and efficient business processes and workflows
    The key business drivers for SaaS 2.0 will be about helping users transform their business structures and processes. SaaS 2.0 could even be considered as a platform for Business Service Provisioning
  • Service level agreements
    SaaS 2.0 will provide a much more robust infrastructure and application platform that will need to be driven by Service Level Agreements (SLAs) to ensure high service availability
  • Rapid achievement of business objectives
    The intention of SaaS 2.0 is to enable companies to reach business objectives in a much shorter time frame. For example, SaaS 2.0 facilitates the rapid integration of trading partners into a company’s B2B infrastructure
  • Provide value-added business services
    Vendors and service providers will differentiate themselves by offering a range of value-add business service “plug-ins”. In this way, SaaS 2.0 will offer companies a mix of business process, application functionality and managed services at an operational level. SaaS Integration Platforms (SIPs) will emerge as vendors, consultancies and VARs learn how to bundle and deliver these critical, value-add capabilities
  • Business impact via SaaS “Network Effect”
    Saugatuck defines the “network effect” of SaaS 2.0 as a cascading and radiating impact of business improvement and change within, across and beyond the user enterprise. In essence, a SaaS 2.0 deployment will increase and improve choices, efficiencies and business capabilities within the user enterprise, and between the enterprise and its suppliers, customers and business partners
  • Low-cost “white label” vertical solutions
    The business, technological flexibility and managed services aspects of SaaS 2.0 will see the emergence of a number of vertical industry-focused applications for use by SMBs. Many of these smaller companies have often eluded the enterprise software vendors, but the low cost, ease of deployment and service availability will see SaaS 2.0-based applications being adopted much more quickly among the SMB community
  • SaaS integration platforms provide application sharing, delivery and management services
    As users add SaaS applications over time, SIPs will play a critical role as “solution hubs” that provide integration, delivery and management services

In summary, SaaS 2.0 goes well beyond today’s SaaS business drivers, which have so far focused on cost-effective software deployment. SaaS 2.0 is more focused on helping users transform their business workflow and processes, and ultimately the way in which they do business.

5) A rise in serverless companies with 1000+ employees

6) The rise and fall of the private cloud.

7) Business Intelligence (BI) becomes the next functional area to SaaSify, following CRM and HRM

Cloud computing, architectural considerations for IaaS

Today’s trend:

  • high-performance computing
  • database management systems
  • CPU-intensive processing
  • data-intensive processing

the goals:

  • Scalability
  • Availability
  • Reliability
  • Security
  • Flexibility and agility
  • Serviceability
  • Efficiency

CC raises the level of abstraction so that all components are abstracted or virtualized and can be used to quickly compose higher-level applications or platforms. If a component does not provide a consistent and stable abstraction layer to its clients or peers, it is not appropriate for cloud computing.

In cloud computing, it’s important to maintain the model, not the image itself. The model is maintained; the image is produced from the model.

Virtual machine images will always change because the layers of software within them will always need to be patched, upgraded, or reconfigured. What doesn’t change is the process of creating the virtual machine image, and this is what developers should focus on.
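The model-versus-image idea can be sketched as follows: the image is always regenerated from a declarative model, so a patch means editing the model and rebuilding, never hand-editing the image. The model fields and package names here are hypothetical.

```python
# Sketch of "maintain the model, not the image": the image is a pure
# function of the model, so any change goes into the model first.
def build_image(model):
    """Produce a fresh VM image description from the declarative model."""
    return {
        "os": f"{model['os']} (patched to {model['patch_level']})",
        "packages": sorted(model["packages"]),
    }

model = {"os": "linux", "patch_level": "2024-06",
         "packages": ["nginx", "python3"]}
image = build_image(model)

# To upgrade, change the model and rebuild; never patch the image by hand.
model["patch_level"] = "2024-07"
image = build_image(model)
print(image["os"])
```

Because `build_image` is deterministic, any image can be reproduced or audited from the model alone, which is exactly why the build process, not the image, is the thing worth maintaining.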

Standards help to address complexity

CC emphasizes efficiency above all, so adopting a small set of standards and standard configurations helps to reduce maintenance and deployment costs. Having standards that make deployment easy is more important than having the perfect environment for the job. A useful rule applies here: “cloud computing focuses on the few standards that can support 80% of the use cases.”

For an enterprise shifting to cloud computing, standards may include the type of virtual machine, the operating system in standard virtual machine images, tools, and programming languages supported:

  • Virtual machine types: consider the impact of virtual machine choice on the application to be supported. For a social networking application, isolation for security and a high level of abstraction for portability would suggest using Type 2 VMs. For high-performance computing or visualization applications, the need to access hardware directly to achieve the utmost performance would suggest using Type 1 VMs. (The software layer providing the virtualization is called a virtual machine monitor or hypervisor. A hypervisor can run on bare hardware (Type 1, or native VM) or on top of an operating system (Type 2, or hosted VM).)
  • Preinstalled, preconfigured systems: the software on VMs must be maintained just as it is on a physical server. OSs still need to be hardened, patched, and upgraded.
  • Tools & languages: enterprises might standardize on the Java programming language and Ruby on Rails; small businesses might standardize on PHP. Just remember platform as a service.

Virtualization and encapsulation supports refactoring

A development pattern can be encapsulated for re-use. In this example, a pattern specifies Web, application, and database server tiers, and all it needs to deploy an instance of itself are pointers to VM images for each of the three layers.

Loosely coupled, stateless, fail-in-place computing

For years, web applications have been moving toward being loosely coupled and stateless. In CC, these characteristics are even more important because of CC’s even more dynamic nature. Coupling between application components needs to be loose so that a failure of any component does not affect overall application availability. A component should be able to “fail in place” with little or no impact on the application.
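Fail-in-place can be illustrated with a small failover sketch: because the replicas are stateless, any one of them can serve the request, and a dead replica is simply skipped rather than repaired. The replica functions here simulate network behavior and are purely illustrative.

```python
# Sketch of fail-in-place across stateless, loosely coupled replicas:
# the request succeeds as long as any replica answers.
def call_with_failover(replicas, request):
    """Try each stateless replica in turn; tolerate individual failures."""
    for replica in replicas:
        try:
            return replica(request)
        except ConnectionError:
            continue  # fail in place: leave the dead replica alone, move on
    raise RuntimeError("all replicas failed")

def dead(request):
    raise ConnectionError("replica unreachable")

def healthy(request):
    return f"handled {request}"

print(call_with_failover([dead, dead, healthy], "GET /"))  # handled GET /
```

Statelessness is what makes the retry safe: since no replica holds session state, retrying the same request on a different replica cannot produce an inconsistent result.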

Horizontal scaling

CC makes a massive amount of horizontal scalability available to applications that can take advantage of it. The trend toward designing and refactoring applications to work well in horizontally scaled environments means that an increasing number of applications are well suited to cloud computing.

The combination of stateless and loose-coupled application components with horizontal scaling promotes a fail-in-place strategy that does not depend on the reliability of any one component.

Parallelization

In the physical world, parallelization is often implemented with load balancers or content switches that distribute incoming requests across a number of servers. In the cloud computing world, parallelization can be implemented with a load balancing appliance or a content switch that distributes incoming requests across a number of virtual machines. In both cases, applications can be designed to recruit additional resources to accommodate workload spikes.
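The request distribution described above can be sketched with a worker pool standing in for the virtual machines; absorbing a workload spike amounts to raising the pool size. The handler and request names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of parallelization: incoming requests are distributed across
# a pool of workers (standing in for VMs behind a load balancer), and
# the pool size can grow to absorb workload spikes.
def handle(request):
    return f"done: {request}"

requests = [f"req-{i}" for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:  # "recruit" 4 workers
    results = list(pool.map(handle, requests))

print(results)
```

`pool.map` preserves request order in the results even though the work is executed concurrently, which keeps the parallel version a drop-in replacement for a sequential loop.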

Security and data physics

  • Encrypt data so that if an intruder penetrates a cloud provider’s security, or a configuration error makes the data accessible to unauthorized parties, it cannot be interpreted.
  • Encrypt data in transit
  • Strong authentication ensures data is transmitted only to known parties
  • Pay attention to cryptography and how algorithms are cracked and are replaced by new ones over time.
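The strong-authentication bullet above can be illustrated with an HMAC sketch: a keyed tag lets the receiver verify that data came from a known party holding the shared key and was not altered in transit. Key distribution and rotation are simplified away here for illustration.

```python
import hashlib
import hmac
import secrets

# Sketch of strong authentication with an HMAC: only parties holding
# the shared key can produce a tag that verifies.
key = secrets.token_bytes(32)  # shared in advance between known parties

def sign(message: bytes) -> bytes:
    """Compute the HMAC-SHA256 tag for a message under the shared key."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Check a tag in constant time to avoid timing side channels."""
    return hmac.compare_digest(sign(message), tag)

msg = b"customer record 42"
tag = sign(msg)
print(verify(msg, tag))          # True: sender holds the key
print(verify(b"tampered", tag))  # False: data was altered in transit
```

This also connects to the last bullet: HMAC-SHA256 is considered strong today, but the algorithm choice should be revisited as cryptographic attacks improve over time.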

Network security practices

There are several approaches:

  • Use security domains to group virtual machines together, and then control access to the domain through the cloud provider’s port filtering capabilities.

Cloud providers should offer mechanisms, such as security domains, to secure a group of virtual machines and control traffic flow in and out of the group.

  • Control traffic using the cloud provider’s port-based filtering, or utilize more stateful packet filtering by interposing content switches or firewall appliances where appropriate.
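The port-based filtering just described can be sketched as a simple allow-list check at the edge of a security domain. The allowed ports are the usual HTTPS/SSH examples, chosen for illustration rather than as a recommendation.

```python
# Minimal sketch of port-based filtering for a security domain: only
# traffic to explicitly allowed ports reaches the group of VMs.
ALLOWED_PORTS = {443, 22}  # e.g. HTTPS and SSH; illustrative choice

def filter_packet(src_ip, dst_port):
    """Return True if the packet may enter the security domain."""
    return dst_port in ALLOWED_PORTS

print(filter_packet("203.0.113.7", 443))   # True  (HTTPS allowed in)
print(filter_packet("203.0.113.7", 3306))  # False (database port blocked)
```

A stateful firewall extends this idea by also tracking connection state, so that, for example, only reply packets belonging to an established connection are admitted.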

Overview of Cloud Computing Architecture (Sun’s view)

Sun believes that cloud computing (CC) is the next generation of network computing.

So, what distinguishes CC from previous models?

  • It uses information technology as a service over the network.
  • We define it as services that are encapsulated, have an API, and are available over the network
  • It uses both compute and storage resources as services

–> The predominant model for CC today is infrastructure as a service, or IaaS.

The Nature of Cloud Computing

CC incorporates virtualization, on-demand deployment, Internet delivery of services, and open source software. From one perspective, CC uses concepts, approaches, and best practices that have already been established; therefore, CC is nothing new. From another perspective, everything is new, because CC changes the way we invent, develop, deploy, scale, update, maintain, and pay for applications.

The on-demand, self-service, pay-by-use nature of CC is also an established trend. Virtualization is a key feature.
Services are delivered over the network. No matter where users are or who they are, if they are authorized (as employees, partners, suppliers, or consultants), applications can be made available anywhere and at any time.

Cloud computing infrastructure models

There are three models that offer complementary benefits: public, private, and hybrid clouds. These terms do not dictate location: while public clouds are typically “out there” on the Internet, private clouds are typically located on premises.

Public clouds provide services to multiple customers and are typically deployed at a colocation facility.


Private clouds may be hosted at a colocation facility or in an enterprise data center. They may be supported by the company, by a cloud provider, or by a third party such as an outsourcing firm.


Hybrid clouds combine both public and private cloud models, and they can be particularly effective when both types of cloud are located in the same facility.


Architectural layers of cloud computing

Sun’s view of CC is using IT infrastructure as a service – and that service may be anything from renting raw hardware to using third-party APIs. In practice, services can be grouped into three categories: software as a service, platform as a service, and infrastructure as a service.

Software as a service (SaaS) features a complete application offered as a service on demand. A single instance of the software runs on the cloud and serves multiple end users or client organizations. Salesforce.com and Google Docs are examples of SaaS.

Platform as a service (PaaS) encapsulates a layer of software and provides it as a service that can be used to build higher-level services. There are at least two perspectives on PaaS, depending on whether one is the producer or the consumer of the services:

  • Someone producing PaaS might produce a platform by integrating an OS, middleware, application software, and even a development environment that is then provided to a customer as a service. For example, a platform might combine the NetBeans integrated development environment, the Sun GlassFish™ Web stack, and support for additional programming languages such as Perl or Ruby.
  • Someone using PaaS would see an encapsulated service that is presented to them
    through an API.

Commercial examples of PaaS include Google App Engine, which serves applications on Google’s infrastructure.

Infrastructure as a service (IaaS) delivers basic storage and compute capabilities as
standardized services over the network. Servers, storage systems, switches, routers,
and other systems are pooled and made available to handle workloads that range
from application components to high-performance computing applications. Commercial examples of IaaS include Joyent, whose main product is a line of virtualized servers that provide a highly available on-demand infrastructure.

The benefits of CC are substantial: reduced runtime and response time, minimized infrastructure risk, lower cost, and an increased pace of innovation. They are almost the same as the benefits of grid computing, the field I am studying.

From the above, the predominant model for CC today is infrastructure as a service, or IaaS. So the next topic should be “Cloud computing, architecture of IaaS”.

HienTT