Cloud Computing
Cloud Computing storage saas iaas paas.pptx
What is cloud computing?
Cloud computing is the delivery of computing services over the internet.
Cloud Characteristics
● Cloud systems automatically control and optimize the resources used.
● They leverage a metering capability at some level of abstraction appropriate to the service (such as compute, RAM, storage, and bandwidth).
● Resource usage can be monitored and reported transparently for both the consumer and the provider.
● Capabilities can be elastically provisioned and released to scale with demand.
Common Cloud Characteristics
● Massive Scaling
● Geographic Distribution
● Virtualization
● Service orientation
● Low cost
● Enhanced security
Software as a Service (SaaS)
● Software applications are deployed on the cloud and provided to clients. The client does
not manage or control the underlying cloud infrastructure such as the network, servers,
operating systems, or storage. Example: Google Docs.
Platform as a service (PaaS)
● A platform on which consumers deploy and run their own applications without managing
the underlying infrastructure; the deployed applications are available to users from
multiple devices of various types.
Cloud Infrastructure as a Service (IaaS)
● Provision of processing, storage, network, and other fundamental computing resources.
● Consumers can deploy and run arbitrary software, including operating systems and applications.
https://www.youtube.com/watch?v=NzZXz3fJf6o&list=PL-FqPEn1dZJDg-6LHNYnappA6DcXz3ieZ&index=2
Cloud Deployment model
Cloud Computing architectures
https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=20298
Reference Architecture
● Basis for documentation and project communication
● Stakeholder and team communication
● Payment, contract, and cost models
Technical Architecture
● Structuring according to XaaS Stack
● Adopting Cloud Platform paradigms
● Structuring cloud services and cloud components
● Showing relationships and external endpoints
● Middleware and communication
● Management and security
Deployment Operation Architecture
● Geo-location check (legal issues, export control)
● Operation and monitoring
Coutinho, E.F., de Carvalho Sousa, F.R., Rego, P.A.L. et al. Elasticity in cloud computing: a survey. Ann. Telecommun. 70, 289–309 (2015). https://doi.org/10.1007/s12243-014-0450-7
Coutinho, E., Gomes, D., & de Souza, J. (2015). An Autonomic Computing-based Architecture for Cloud Computing Elasticity. https://doi.org/10.1109/lanoms.2015.7332681
BENEFITS OF ELASTIC COMPUTING
Elasticity in the cloud has brought a turnaround in business storage and computing. Its benefits to a
business can be summarized as follows:
• Simple scalability and high performance: Whatever infrastructure and services an organization
requires are provisioned quickly. With scalability as a core feature of cloud deployments,
performance improves and fast computation is ensured.
• Cost-efficient: With elastic computing, costs drop drastically: there is no need for capital IT
infrastructure, and payment is made only for actual usage.
• Greater redundancy: Better flexibility, reliability, affordability, and recovery options are assured.
• More capacity: Practically unlimited storage capacity is available to business organizations.
Being virtual, it can be accessed from anywhere, at any time, across the network.
• High availability: Files are simple to access and available at all times, with options to view and
modify. System breakdowns are negligible thanks to alternative backups.
• Easier management: The burden of maintaining, upgrading, and deploying IT infrastructure is
lifted from IT teams.
• Environment friendly: The cloud consumes fewer resources, making it highly environment friendly.
https://www.jigsawacademy.com/blogs/cloud-computing/elastic-computing
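The elasticity described above can be sketched as a simple threshold-based scaling rule. This is a hypothetical illustration, not any provider's auto-scaling API; the thresholds and function name are invented for the example:

```python
def scale_decision(cpu_utilization, current_instances,
                   scale_out_at=0.75, scale_in_at=0.25,
                   min_instances=1, max_instances=10):
    """Return the new instance count for a toy elasticity policy:
    add capacity under load, release it when the system is idle."""
    if cpu_utilization > scale_out_at and current_instances < max_instances:
        return current_instances + 1
    if cpu_utilization < scale_in_at and current_instances > min_instances:
        return current_instances - 1
    return current_instances
```

Real auto-scalers add cooldown periods and scale by more than one instance at a time, but the pay-only-for-usage benefit comes from exactly this kind of release-when-idle rule.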
Unit 2. Virtualization
Virtualization and Physical computation resources
● What is virtualization?
"Virtualization" refers to the process of making a "virtual version" of hardware or software:
infrastructure, devices, and computing resources. Creating a virtual machine on top of an
existing operating system and hardware is known as hardware virtualization.
Virtualization lets users decouple operating systems from the underlying hardware, i.e.,
users can run multiple operating systems, such as Windows and Linux, on a single physical
machine at the same time.
The bootstrapping process does not require any outside input to start. Software is loaded as
required by the operating system rather than all of it being loaded automatically.
A hypervisor is a kind of emulator; it is computer software, firmware, or hardware that creates and runs virtual
machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and
each virtual machine is called a guest machine.
Type-1 (native or bare-metal hypervisors): These
hypervisors run directly on the host's hardware to control the
hardware and to manage guest operating systems. For this
reason, they are sometimes called bare metal hypervisors.
Eg. Microsoft Hyper-V
Type-2 (hosted hypervisors): These hypervisors run on a
conventional operating system (OS) just as other computer programs
do. A guest operating system runs as a process on the host. Type-2
hypervisors abstract guest operating systems from the host operating
system. Parallels Desktop for Mac, QEMU, VirtualBox, VMware Player
and VMware Workstation are examples of type-2 hypervisors.
What is meant by bare metal?
The term bare metal refers to the fact that there is no operating system between the virtualization
software and the hardware. The virtualization software resides on the "bare metal" or the hard disk
of the hardware, where the operating system is usually installed.
Bare metal isn't only used to describe hypervisors. A bare metal server is a regular, single-tenant
server. However, it can be a host machine for virtual machines with the addition of a hypervisor and
virtualization software. A bare metal cloud refers to a customer renting the actual servers that host
the public cloud from a cloud service provider, in addition to renting the public cloud services.
https://www.vmware.com/topics/glossary/content/bare-metal-hypervisor
vSphere Hypervisor:
Virtualize servers to manage your IT infrastructure, allowing you to
consolidate your applications while saving time and money with the
bare-metal architecture of vSphere Hypervisor.
How does a hypervisor work?
Hypervisors support the creation and management of virtual machines (VMs) by abstracting a
computer’s software from its hardware. Hypervisors make virtualization possible by translating
requests between the physical and virtual resources. Bare-metal hypervisors are sometimes
embedded into the firmware at the same level as the motherboard basic input/output system (BIOS) to
enable the operating system on a computer to access and use virtualization software.
https://www.vmware.com/topics/glossary/content/hypervisor
Benefits of hypervisors
There are several benefits to using a hypervisor that hosts multiple virtual machines:
• Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal servers. This
makes it easier to provision resources as needed for dynamic workloads.
• Efficiency: Hypervisors that run several virtual machines on one physical machine’s resources
also allow for more efficient utilization of one physical server. It is more cost- and energy-efficient
to run several virtual machines on one physical machine than to run multiple underutilized
physical machines for the same task.
• Flexibility: Bare-metal hypervisors allow operating systems and their associated applications to
run on a variety of hardware types because the hypervisor separates the OS from the underlying
hardware, so the software no longer relies on specific hardware devices or drivers.
• Portability: Hypervisors allow multiple operating systems to reside on the same physical server
(host machine). Because the virtual machines that the hypervisor runs are independent from the
physical machine, they are portable. IT teams can shift workloads and allocate networking,
memory, storage and processing resources across multiple servers as needed, moving from
machine to machine or platform to platform. When an application needs more processing power,
the virtualization software allows it to seamlessly access additional machines.
https://www.vmware.com/topics/glossary/content/hypervisor
Container vs hypervisor
Containers and hypervisors are both involved in making applications faster and more efficient, but they
achieve this in different ways.
Hypervisors:
Allow an operating system to run independently from the underlying hardware through the use of virtual
machines. Share virtual computing, storage and memory resources. Can run multiple operating systems on
top of one server (bare-metal hypervisor) or installed on top of one standard operating system and isolated
from it (hosted hypervisor).
Containers:
Allow applications to run independently of the underlying environment. They share the host operating
system's kernel, so all they need to run is a container engine. They are extremely portable, since a
container packages everything the application needs to run.
Hypervisors and containers are used for different purposes. Hypervisors are used to create and run virtual
machines (VMs), which each have their own complete operating systems, securely isolated from the others. In
contrast to VMs, containers package up just an app and its related services. This makes them more
lightweight and portable than VMs, so they are often used for fast and flexible application development and
movement.
Disaster Recovery
What is Disaster recovery?
Disaster recovery is an organization's method of regaining access to, and functionality of, its IT
infrastructure after events like a natural disaster or cyber attack.
https://www.vmware.com/topics/glossary/content/disaster-recovery
Disaster Recovery
● How does disaster recovery work?
Disaster recovery relies upon the replication of data and computer processing in an off-premises location
not affected by the disaster. When servers go down because of a natural disaster, equipment failure or cyber
attack, a business needs to recover lost data from a second location where the data is backed up. Ideally, an
organization can transfer its computer processing to that remote location as well in order to continue
operations.
https://www.vmware.com/topics/glossary/content/disaster-recovery
5 top elements of an effective disaster recovery plan
1. Disaster recovery team: This assigned group of specialists will be responsible for creating, implementing and managing the disaster recovery plan. This
plan should define each team member’s role and responsibilities. In the event of a disaster, the recovery team should know how to communicate with each
other, employees, vendors, and customers.
2. Risk evaluation: Assess potential hazards that put your organization at risk. Depending on the type of event, strategize what measures and resources will
be needed to resume business. For example, in the event of a cyber attack, what data protection measures will the recovery team have in place to respond?
3. Business-critical asset identification: A good disaster recovery plan includes documentation of which systems, applications, data, and other resources are
most critical for business continuity, as well as the necessary steps to recover data.
4. Backups: Determine what needs backup (or to be relocated), who should perform backups, and how backups will be implemented. Include a recovery point
objective (RPO) that states the frequency of backups and a recovery time objective (RTO) that defines the maximum amount of downtime allowable after a
disaster. These metrics create limits to guide the choice of IT strategy, processes and procedures that make up an organization’s disaster recovery plan. The
amount of downtime an organization can handle and how frequently the organization backs up its data will inform the disaster recovery strategy.
5. Testing and optimization: The recovery team should continually test and update its strategy to address ever-evolving threats and business needs. By
continually ensuring that a company is ready to face the worst-case scenarios in disaster situations, it can successfully navigate such challenges. In planning
how to respond to a cyber attack, for example, it’s important that organizations continually test and optimize their security and data protection strategies
and have protective measures in place to detect potential security breaches.
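The RPO and RTO limits defined in step 4 reduce to simple comparisons. A minimal sketch, with hypothetical helper names and hours as the unit:

```python
def meets_rpo(backup_interval_hours, rpo_hours):
    # Worst-case data loss equals the time since the last backup,
    # so the backup interval must not exceed the recovery point objective.
    return backup_interval_hours <= rpo_hours

def meets_rto(measured_recovery_hours, rto_hours):
    # Recovery must complete within the maximum allowable downtime.
    return measured_recovery_hours <= rto_hours
```

For example, hourly backups satisfy a 4-hour RPO, while daily backups do not; these two checks are what "create limits to guide the choice of IT strategy" in practice.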
What are the types of disaster recovery?
Back-up: This is the simplest type of disaster recovery and entails storing data off site or on a removable drive. However, just backing up data provides only minimal business continuity help, as the IT
infrastructure itself is not backed up.
Cold Site: In this type of disaster recovery, an organization sets up a basic infrastructure in a second, rarely used facility that provides a place for employees to work after a natural disaster or fire. It
can help with business continuity because business operations can continue, but it does not provide a way to protect or recover important data, so a cold site must be combined with other methods of
disaster recovery.
Hot Site: A hot site maintains up-to-date copies of data at all times. Hot sites are time-consuming to set up and more expensive than cold sites, but they dramatically reduce down time.
Disaster Recovery as a Service(DRaaS): In the event of a disaster or ransomware attack, a DRaaS provider moves an organization’s computer processing to its own cloud infrastructure, allowing a business to
continue operations seamlessly from the vendor’s location, even if an organization’s servers are down. DRaaS plans are available through either subscription or pay-per-use models. There are pros and cons to
choosing a local DRaaS provider: latency will be lower after transferring to DRaaS servers that are closer to an organization’s location, but in the event of a widespread natural disaster, a DRaaS that is nearby may
be affected by the same disaster.
Back Up as a Service: Similar to backing up data at a remote location, with Back Up as a Service, a third party provider backs up an organization’s data, but not its IT infrastructure.
Datacenter disaster recovery: The physical elements of a data center can protect data and contribute to faster disaster recovery in certain types of disasters. For instance, fire suppression tools will help data and
computer equipment survive a fire. A backup power source will help businesses sail through power outages without grinding operations to a halt. Of course, none of these physical disaster recovery tools will help
in the event of a cyber attack.
Virtualization: Organizations can back up certain operations and data or even a working replica of an organization’s entire computing environment on off-site virtual machines that are unaffected by physical
disasters. Using virtualization as part of a disaster recovery plan also allows businesses to automate some disaster recovery processes, bringing everything back online faster. For virtualization to be an effective
disaster recovery tool, frequent transfer of data and workloads is essential, as is good communication within the IT team about how many virtual machines are operating within an organization.
Point-in-time copies: Point-in-time copies, also known as point-in-time snapshots, make a copy of the entire database at a given time. Data can be restored from this back-up, but only if the copy is stored off site
or on a virtual machine that is unaffected by the disaster.
Instant recovery: Instant recovery is similar to point-in-time copies, except that instead of copying a database, instant recovery takes a snapshot of an entire virtual machine.
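The idea behind point-in-time copies can be illustrated with a toy key-value store that snapshots and restores its whole state. Real systems snapshot databases or entire virtual machines, but the principle is the same; all names here are invented for the sketch:

```python
import copy

class PointInTimeStore:
    """Toy key-value store with point-in-time snapshots (illustration only)."""
    def __init__(self):
        self.data = {}
        self.snapshots = {}

    def snapshot(self, label):
        # Copy the entire state exactly as it exists right now.
        self.snapshots[label] = copy.deepcopy(self.data)

    def restore(self, label):
        # Roll the store back to the chosen point in time.
        self.data = copy.deepcopy(self.snapshots[label])

store = PointInTimeStore()
store.data["orders"] = [1, 2]
store.snapshot("before-disaster")
store.data["orders"].append(3)   # later change that the disaster destroys
store.restore("before-disaster")
print(store.data["orders"])      # → [1, 2]
```

As the text notes, such a copy only helps if it is stored off site or on a machine unaffected by the disaster.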
● How does cloud disaster recovery work?
Cloud disaster recovery takes a very different approach from traditional DR. Instead of loading the servers with
the OS and application software and patching to the last configuration used in production, cloud disaster recovery
encapsulates the entire server, including the operating system, applications, patches, and data, into a single
software bundle or virtual server. The virtual server is then copied or backed up to an offsite data center or spun
up on a virtual host in minutes. Since the virtual server is not dependent on hardware, the operating system,
applications, patches, and data can be migrated from one data center to another much faster than with traditional
DR approaches.
https://doi.org/10.1016/B978-1-59749-305-5.00010-4
https://rukpat.wordpress.com/tag/hardware-environments/
https://docs.oracle.com/cd/E57185_01/EPMDO/ch09s02.html
https://docs.microsoft.com/en-us/azure/architecture/solution-ideas/articles/disaster-recovery-enterprise-scale-dr
Performance management and capacity planning
Performance management means monitoring and allocating existing data processing resources to
applications according to a Service Level Agreement (SLA) or informal service objectives.
Capacity planning is the process of planning for sufficient computer capacity in a cost-effective manner
to meet the future service needs for all users.
● Performance management: The goal of performance management is to make the best use of your
current resources to meet your current objectives, without excessive tuning effort. To formalize your
objectives, you can set up a Service Level Agreement (SLA).
An SLA is a contract that objectively describes measurable performance factors, for example:
● Average transaction response time (for network, I/O, processor, or in total)
● Transaction volumes
● System availability
● Capacity planning: Capacity planning involves asking the following questions:
How much of your computer resources (processor, storage, I/O, network) are being used?
Which workloads are consuming the resources (workload distribution)?
What are the expected growth rates?
When will the demands on current resources affect service levels?
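The growth-rate question can be answered with a back-of-the-envelope projection. This sketch assumes compound monthly growth, which is a simplifying assumption; real capacity planning works from measured workload trends:

```python
import math

def months_until_capacity(current_util, growth_per_month, threshold=1.0):
    """Months until utilization crosses the threshold, assuming
    utilization grows by a fixed fraction each month."""
    if current_util >= threshold:
        return 0
    # Solve current_util * (1 + g)^n >= threshold for n.
    return math.ceil(math.log(threshold / current_util)
                     / math.log(1 + growth_per_month))
```

For instance, a processor at 50% utilization growing 10% per month reaches full capacity in 8 months, which tells the planner when service levels will be affected.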
Load Balancing
What is load balancing ?
In computing, load balancing refers to the process of distributing a set of tasks over a set of
resources (computing units), with the aim of making their overall processing more efficient. Load
balancing can optimize the response time and avoid unevenly overloading some compute nodes
while other compute nodes are left idle.
https://en.wikipedia.org/wiki/Load_balancing_(computing)
Static and dynamic algorithms
● Static: A load balancing algorithm is "static" when it does not take into account the state of the system for
the distribution of tasks.
● Dynamic: Unlike static load distribution algorithms, dynamic algorithms take into account the current load
of each of the computing units (also called nodes) in the system.
Round Robin method of load balancing (algorithm)
Disadvantage: Requests from the same IP may be allocated to different servers, even when the requests are related.
Least Packet Method
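The methods above can be sketched as follows. This is a toy illustration with invented server addresses; `least_connections` stands in for dynamic, load-aware methods such as the least-packet method, and the IP-hash variant shows one way to avoid the related-requests disadvantage of round robin:

```python
import hashlib
import itertools

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Static: round robin cycles through the servers regardless of their load.
_ring = itertools.cycle(servers)

def round_robin(_request_ip):
    return next(_ring)

# Static but sticky: hashing the client IP always yields the same server,
# so related requests from one client land on one node.
def ip_hash(request_ip):
    digest = hashlib.sha256(request_ip.encode()).digest()
    return servers[digest[0] % len(servers)]

# Dynamic: pick the node with the fewest active connections right now.
active = {s: 0 for s in servers}

def least_connections(_request_ip):
    return min(servers, key=lambda s: active[s])
```

Round robin and IP hash are "static" in the sense of the earlier slide (they ignore system state); `least_connections` is "dynamic" because it consults the current load of each node.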
Unit 3
Containers
Containers are technologies that allow you to package and
isolate applications with their entire runtime environment—
all of the files necessary to run. This makes it easy to move
the contained application between environments (dev, test,
production, etc.) while retaining full functionality.
What can you do with containers?
You can deploy containers for a number of
workloads and use cases, big to small. Containers
give your team the underlying technology needed
for a cloud-native development style
Example: Docker Container
A Docker container is a popular lightweight, standalone, executable container that includes
everything needed to run an application, including libraries, system tools, code, and runtime.
Docker is also a software platform that allows developers to build, test, and deploy
containerized applications quickly.
How does Docker work?
The Docker technology uses the Linux kernel and features of the
kernel, like Cgroups and namespaces, to segregate processes so
they can run independently. This independence is the intention
of containers—the ability to run multiple processes and apps
separately from one another to make better use of your
infrastructure
Amazon Cloud compute services
● Amazon EC2 (https://www.youtube.com/watch?v=TsRBftzZsQo )
Amazon EC2 Quick Start
adapted from http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
Amazon Elastic Compute Cloud (EC2)
● Amazon Machine Images (AMIs) are the basic building blocks of
Amazon EC2
● An AMI is a template that contains a software configuration (operating
system, application server and applications) that can run on Amazon’s
computing environment
● AMIs can be used to launch an instance, which is a copy of the AMI
running as a virtual server in the cloud.
Getting Started with Amazon EC2
● Step 1: Sign up for Amazon EC2
● Step 2: Create a key pair
● Step 3: Launch an Amazon EC2 instance
● Step 4: Connect to the instance
● Step 5: Customize the instance
● Step 6: Terminate instance and delete the volume created
Creating a key pair
● AWS uses public-key cryptography to encrypt and decrypt login
information.
● AWS only stores the public key, and the user stores the private key.
● There are two options for creating a key pair:
○ Have Amazon EC2 generate it for you
○ Generate it yourself using a third-party tool such as OpenSSH, then import
the public key to Amazon EC2
Generating a key pair with Amazon EC2
1. Open the Amazon EC2 console at http://console.aws.amazon.com/ec2/
2. On the navigation bar select region for the key pair
3. Click Key Pairs in the navigation pane to display the list of key pairs associated
with the account
Generating a key pair with EC2 (cont.)
4. Click Create Key Pair
5. Enter a name for the key pair in the Key Pair Name field of the dialog
box and click Create
6. The private key file, with .pem extension, will automatically be
downloaded by the browser.
Launching an Amazon EC2 instance
1. Sign in to AWS Management Console and open the Amazon EC2 console
at http://console.aws.amazon.com/ec2/
2. From the navigation bar select the region for the instance
Launching an Amazon EC2 instance (cont.)
3. From the Amazon EC2 console dashboard, click Launch Instance
Launching an Amazon EC2 instance (cont.)
4. On the Create a New Instance page, click Quick Launch Wizard
5. In Name Your Instance, enter a name for the instance
6. In Choose a Key Pair, choose an existing key pair, or create a new one
7. In Choose a Launch Configuration, a list of basic machine configurations are
displayed, from which an instance can be launched
8. Click continue to view and customize the settings for the instance
Launching an Amazon EC2 instance (cont.)
9. Select a security group for the instance. A Security Group defines the firewall
rules specifying the incoming network traffic delivered to the instance. Security
groups can be defined on the Amazon EC2 console, in Security Groups under
Network and Security
Launching an Amazon EC2 instance (cont.)
10. Review settings and click Launch to launch the instance
11. Close the confirmation page to return to EC2 console
12. Click Instances in the navigation pane to view the status of the instance. The
status is pending while the instance is launching
After the instance is launched, its status changes to running
Connecting to an Amazon EC2 instance
● There are several ways to connect to an EC2 instance once it’s
launched.
● Remote Desktop Connection is the standard way to connect to
Windows instances.
● An SSH client (standalone or web-based) is used to connect to Linux
instances.
Connecting to Linux/UNIX Instances from
Linux/UNIX with SSH
Prerequisites:
- Most Linux/UNIX computers include an SSH client by default, if not it can be
downloaded from openssh.org
- Enable SSH traffic on the instance (using security groups)
- Get the path the private key used when launching the instance
1. In a command line shell, change directory to the path of the private key file
2. Use the chmod command to make sure the private key file isn’t publicly viewable
Connecting to Linux/UNIX Instances(cont.)
3. Right click on the instance to connect to on the AWS console, and click
Connect.
4. Click Connect using a standalone SSH client.
5. Enter the example command provided in the Amazon EC2 console at the
command line shell
Transferring files to Linux/UNIX instances
from Linux/UNIX with SCP
Prerequisites:
- Enable SSH traffic on the instance
- Install an SCP client (included by default mostly)
- Get the ID of the Amazon EC2 instance, public DNS of the instance, and the path
to the private key
If the key file is My_Keypair.pem, the file to transfer is samplefile.txt, and the instance’s
DNS name is ec2-184-72-204-112.compute-1.amazonaws.com, the command below
copies the file to the ec2-user home
Terminating Instances
- If the instance launched is not in the free usage tier, as soon as the
instance starts to boot, the user is billed for each hour the instance
keeps running.
- A terminated instance cannot be restarted.
- To terminate an instance:
1. Open the Amazon EC2 console
2. In the navigation pane, click Instances
3. Right-click the instance, then click Terminate
4. Click Yes, Terminate when prompted for confirmation
Video Tutorial
https://www.youtube.com/watch?v=OLfmqcYnhUM
Google App Engine
Quick Start
adapted from
https://developers.google.com/appengine/docs
Google App Engine (GAE)
● GAE lets users run web applications on Google’s infrastructure
● GAE data storage options are:
○ Datastore: a NoSQL schemaless object datastore
○ Google Cloud SQL: Relational SQL database service
○ Google Cloud Storage: Storage service for objects and files
● All applications on GAE can use up to 1 GB of storage and enough CPU and
bandwidth to support an efficient application serving around 5 million page
views a month for free.
● Three runtime environments are supported: Java, Python and Go.
Developing Java Applications on GAE
● The easiest way to develop Java applications for GAE is to use the Eclipse
development environment with the Google plugin for Eclipse.
● App Engine Java applications use the Java Servlet standard for interacting
with the web server environment.
● An application’s files, including compiled classes, JARs, static files and
configuration files, are arranged in a directory structure using the WAR
standard layout for Java web applications.
Running a Java Project
● The App Engine SDK includes a web server application to test applications. The server simulates the
complete GAE environment.
● The project can be run using the “Debug As > Web Application” option of Eclipse or using Ant.
● After running the server, the application can be tested by visiting the server’s URL in a Web browser.
Uploading an Application to GAE
● Applications are created and managed using the Administration
Console at https://appengine.google.com.
● Once an application ID is registered for an application, the application
can be uploaded to GAE using the Eclipse plugin or a command-line
tool in the SDK.
● After uploading, the application can be accessed from a Web browser. If
a free appspot.com account was used for registration, the URL for the
application will be http://app_id.appspot.com/, where app_id is the
application ID assigned during registration.
Comparison of AWS, GCP, Azure
● https://www.youtube.com/watch?v=n24OBVGHufQ
| Parameter | AWS | Azure | Google Cloud Platform |
|---|---|---|---|
| App Testing | Device Farm | DevTest Labs | Cloud Test Lab |
| API Management | Amazon API Gateway | Azure API Gateway | Cloud Endpoints |
| Kubernetes Management | EKS | Kubernetes Service | Kubernetes Engine |
| Git Repositories | AWS Source Repositories | Azure Source Repositories | Cloud Source Repositories |
| Data Warehouse | Redshift | SQL Warehouse | BigQuery |
| Object Storage | S3 | Block Blobs and Files | Google Cloud Storage |
| Relational DB | RDS | Relational DBs | Google Cloud SQL |
| Block Storage | EBS | Page Blobs | Persistent Disks |
| Marketplace | AWS | Azure | G Suite |
| File Storage | EFS | Azure Files | ZFS and Avere |
| Media Services | Amazon Elastic Transcoder | Azure Media Services | Cloud Video Intelligence API |
| Virtual Network | VPC | VNet | Subnet |
| Pricing | Per hour | Per minute | Per minute |
| Maximum processors in VM | 128 | 128 | 96 |
| Maximum memory in VM (GiB) | 3904 | 3800 | 1433 |
| Caching | ElastiCache | Redis Cache | Cloud CDN |
| Load Balancing Configuration | Elastic Load Balancing | Load Balancer, Application Gateway | Cloud Load Balancing |
| Global Content Delivery Networks | CloudFront | Content Delivery Network | Cloud I |
https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison
Compute Services
AWS: 1) AWS Beanstalk, 2) Amazon EC2, 3) Amazon EC2 Auto-Scaling, 4) Amazon Elastic Container Registry, 5) Amazon Elastic Kubernetes Service, 6) Amazon Lightsail, 7) AWS Serverless Application Repository, 8) VMware Cloud for AWS, 9) AWS Batch, 10) AWS Fargate, 11) AWS Lambda, 12) AWS Outposts, 13) Elastic Load Balancing
Azure: 1) Platform-as-a-Service (PaaS), 2) Function-as-a-Service (FaaS), 3) Service Fabric, 4) Azure Batch, 5) Cloud Services, 6) Container Instances Batch, 7) Azure Container Service (AKS), 8) Virtual Machines, 9) Virtual Machine Scale Sets
GCP: 1) App Engine, 2) Docker Container Registry, 3) Instant Groups, 4) Compute Engine, 5) Graphics Processing Unit (GPU), 6) Knative, 7) Kubernetes, 8) Functions
Storage Services
AWS: 1) Simple Storage Service (S3), 2) Elastic Block Storage (EBS), 3) Elastic File System (EFS), 4) Storage Gateway, 5) Snowball, 6) Snowball Edge, 7) Snowmobile
Azure: 1) Blob Storage, 2) Queue Storage, 3) File Storage, 4) Disk Storage, 5) Data Lake Store
GCP: 1) Cloud Storage, 2) Persistent Disk, 3) Transfer Appliance, 4) Transfer Service
https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison
AI/ML
AWS: 1) SageMaker, 2) Comprehend, 3) Lex, 4) Polly, 5) Rekognition, 6) Machine Learning, 7) Translate, 8) Transcribe, 9) DeepLens, 10) Deep Learning AMIs, 11) Apache MXNet on AWS, 12) TensorFlow on AWS
Azure: 1) Machine Learning, 2) Azure Bot Service, 3) Cognitive Services
GCP: 1) Cloud Machine Learning Engine, 2) Dialogflow Enterprise Edition, 3) Cloud Natural Language, 4) Cloud Speech API, 5) Cloud Translation API, 6) Cloud Video Intelligence, 7) Cloud Job Discovery (Private Beta)
Database Services
● AWS: Aurora, RDS, DynamoDB, ElastiCache, Redshift, Neptune, Database Migration Service
● Azure: SQL Database, Database for MySQL, Database for PostgreSQL, Data Warehouse, Server Stretch Database, Cosmos DB, Table Storage, Redis Cache, Data Factory
● GCP: Cloud SQL, Cloud Bigtable, Cloud Spanner, Cloud Datastore
Backup Services
● AWS: Glacier
● Azure: Archive Storage, Backup, Site Recovery
● GCP: Nearline (infrequently accessed data), Coldline (rarely accessed data)
Serverless Computing
● AWS: Lambda, Serverless Application Repository
● Azure: Functions
● GCP: Google Cloud Functions
Strengths
● AWS: Dominant market position; extensive, mature offerings; support for large organizations; global reach; flexibility and a wider range of services
● Azure: Second largest provider; integration with Microsoft tools and software; broad feature set; hybrid cloud; support for open source; ideal for startups and developers
● GCP: Designed for cloud-native businesses; commitment to open source and portability; flexible contracts; DevOps expertise; complete container-based model; most cost-efficient
Feature | AWS | Azure | GCP
Caching | ElastiCache | Redis Cache | Cloud CDN
File Storage | EFS | Azure Files | ZFS and Avere
Networking | Amazon Virtual Private Cloud (VPC) | Azure Virtual Network (VNET) | Cloud Virtual Network
Security | AWS Security Hub | Azure Security Center | Cloud Security Command Center
Location | 77 availability zones within 24 geographic regions | Presence in 60+ regions across the world | Presence in 24 regions and 73 zones; available in 200+ countries and territories
Documentation | Best in class | High quality | High quality
DNS Services | Amazon Route 53 | Azure Traffic Manager | Cloud DNS
Notifications | Amazon Simple Notification Service (SNS) | Azure Notification Hub | None
Load Balancing | Elastic Load Balancing | Load Balancing for Azure | Cloud Load Balancing
Automation | AWS OpsWorks | Azure Automation | Compute Engine Management
Compliance | AWS CloudHSM | Azure Trust Center | Google Cloud Platform Security
Pricing/Discount Options | One-year free trial along with a discount of up to 75% for a 1-3 year commitment | Up to 75% discount for a commitment ranging from one to three years | GCP credit of $300 for 12 months apart from a sustained-use discount of up to 30%

  • 5. What is cloud computing? Cloud computing is the delivery of computing services over the internet.
  • 6. Cloud Characteristics ● Cloud systems automatically control and optimize the resources used. ● They leverage a metering capability at an appropriate level of abstraction (such as compute, RAM, storage, and bandwidth). ● Resource usage can be centrally monitored by both the consumer and the provider. ● Capabilities can be elastically provisioned.
  • 7. Common Cloud Characteristics ● Massive Scaling ● Geographic Distribution ● Virtualization ● Service orientation ● Low cost ● Enhanced security
  • 9. Software as a Service (SaaS) ● Software applications are deployed on the cloud and provided to clients. The client does not manage or control the underlying cloud infrastructure such as the network, servers, operating systems, or storage. Example: Google Docs.
  • 10. Platform as a Service (PaaS) ● Consumers deploy their applications on the cloud platform, and users access them from multiple devices of various types; the provider manages the underlying infrastructure.
  • 11. Cloud Infrastructure as a Service (IaaS) ● Provision of processing, storage, network, and other fundamental computing resources. ● Consumers can deploy and run arbitrary software.
  • 18. Reference Architecture ● Basis for documentation and project communication ● Stakeholder and team communication ● Payment, contract, and cost models Technical Architecture ● Structuring according to the XaaS stack ● Adopting cloud platform paradigms ● Structuring cloud services and cloud components ● Showing relationships and external endpoints ● Middleware and communication ● Management and security Deployment Operation Architecture ● Geo-location check (legal issues, export control) ● Operation and monitoring
  • 23. Coutinho, E.F., de Carvalho Sousa, F.R., Rego, P.A.L. et al. Elasticity in cloud computing: a survey. Ann. Telecommun. 70, 289–309 (2015). https://doi.org/10.1007/s12243-014-0450-7
  • 25. Coutinho, Emanuel & Gomes, Danielo & Souza, Jose. (2015). An Autonomic Computing-based Architecture for Cloud Computing Elasticity. 10.1109/lanoms.2015.7332681.
  • 27. BENEFITS OF ELASTIC COMPUTING Elasticity in the cloud has brought a turnaround in business storage. Its benefits to every business can be summarized as follows: • Simple scalability and high performance: Any infrastructure and services the organization requires are provided quickly. With scalability as the core feature of cloud deployments, performance is enhanced and excellent computation speed is ensured. • Cost-efficient: With elastic computing, organizations' costs drop drastically: no capital IT infrastructure is needed, and payment is made only for actual usage. • Greater redundancy: Better flexibility, reliability, affordability, and recovery solutions are assured. • More capacity: Virtually unlimited storage capacity is available to business organizations, and being virtual it can be accessed from anywhere, anytime, across the network. • High availability: Files are simple to access and available at all times with cloud services, with view and modify options; system breakdown is negligible thanks to alternative backups. • Easier management: The era of maintaining, upgrading, and deploying IT infrastructure in-house is past, relieving IT teams. • Environment friendly: The cloud is highly environment friendly, as it consumes fewer resources. https://www.jigsawacademy.com/blogs/cloud-computing/elastic-computing
  • 31. Virtualization and Physical computation resources ● What is virtualization? Creation of a virtual machine over the existing operating system and hardware is known as hardware virtualization. "Virtualization" refers to the process of making a "virtual version" of hardware or software, infrastructure, devices, and computing resources. Virtualization enables users to decouple operating systems from the underlying hardware, i.e., users can run multiple operating systems such as Windows and Linux on a single physical machine at the same time.
  • 34. The bootstrapping process does not require any outside input to start. Any software can be loaded as required by the operating system rather than loading all the software automatically.
  • 35. A hypervisor is a kind of emulator; it is computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine. Type-1 (native or bare-metal hypervisors): These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare-metal hypervisors. E.g., Microsoft Hyper-V. Type-2 (hosted hypervisors): These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. Parallels Desktop for Mac, QEMU, VirtualBox, VMware Player and VMware Workstation are examples of type-2 hypervisors.
  • 39. What is meant by bare metal? The term bare metal refers to the fact that there is no operating system between the virtualization software and the hardware. The virtualization software resides on the "bare metal" or the hard disk of the hardware, where the operating system is usually installed. Bare metal isn't only used to describe hypervisors. A bare metal server is a regular, single-tenant server. However, it can be a host machine for virtual machines with the addition of a hypervisor and virtualization software. A bare metal cloud refers to a customer renting the actual servers that host the public cloud from a cloud service provider, in addition to renting the public cloud services. https://www.vmware.com/topics/glossary/content/bare-metal-hypervisor
  • 40. vSphere Hypervisor: Virtualize servers to manage your IT infrastructure; allowing you to consolidate your applications, while saving time and money, with the bare-metal architecture of vSphere Hypervisor.
  • 41. How does a hypervisor work? Hypervisors support the creation and management of virtual machines (VMs) by abstracting a computer’s software from its hardware. Hypervisors make virtualization possible by translating requests between the physical and virtual resources. Bare-metal hypervisors are sometimes embedded into the firmware at the same level as the motherboard basic input/output system (BIOS) to enable the operating system on a computer to access and use virtualization software. https://www.vmware.com/topics/glossary/content/hypervisor
  • 42. Benefits of hypervisors There are several benefits to using a hypervisor that hosts multiple virtual machines: • Speed: Hypervisors allow virtual machines to be created instantly, unlike bare-metal servers. This makes it easier to provision resources as needed for dynamic workloads. • Efficiency: Hypervisors that run several virtual machines on one physical machine’s resources also allow for more efficient utilization of one physical server. It is more cost- and energy-efficient to run several virtual machines on one physical machine than to run multiple underutilized physical machines for the same task. • Flexibility: Bare-metal hypervisors allow operating systems and their associated applications to run on a variety of hardware types because the hypervisor separates the OS from the underlying hardware, so the software no longer relies on specific hardware devices or drivers. • Portability: Hypervisors allow multiple operating systems to reside on the same physical server (host machine). Because the virtual machines that the hypervisor runs are independent from the physical machine, they are portable. IT teams can shift workloads and allocate networking, memory, storage and processing resources across multiple servers as needed, moving from machine to machine or platform to platform. When an application needs more processing power, the virtualization software allows it to seamlessly access additional machines. https://www.vmware.com/topics/glossary/content/hypervisor
  • 43. Container vs hypervisor Containers and hypervisors are both involved in making applications faster and more efficient, but they achieve this in different ways. Hypervisors: Allow an operating system to run independently from the underlying hardware through the use of virtual machines. Share virtual computing, storage and memory resources. Can run multiple operating systems on top of one server (bare-metal hypervisor) or installed on top of one standard operating system and isolated from it (hosted hypervisor). Containers: Allow applications to run independently of an operating system. Can run on any operating system—all they need is a container engine to run. Are extremely portable since in a container, an application has everything it needs to run. Hypervisors and containers are used for different purposes. Hypervisors are used to create and run virtual machines (VMs), which each have their own complete operating systems, securely isolated from the others. In contrast to VMs, containers package up just an app and its related services. This makes them more lightweight and portable than VMs, so they are often used for fast and flexible application development and movement.
  • 44. Disaster Recovery What is disaster recovery? Disaster recovery is an organization's method of regaining access and functionality to its IT infrastructure after events like a natural disaster or cyber attack. https://www.vmware.com/topics/glossary/content/disaster-recovery
  • 45. Disaster Recovery ● How does disaster recovery work? Disaster recovery relies upon the replication of data and computer processing in an off-premises location not affected by the disaster. When servers go down because of a natural disaster, equipment failure or cyber attack, a business needs to recover lost data from a second location where the data is backed up. Ideally, an organization can transfer its computer processing to that remote location as well in order to continue operations. https://www.vmware.com/topics/glossary/content/disaster-recovery
  • 46. 5 top elements of an effective disaster recovery plan 1. Disaster recovery team: This assigned group of specialists will be responsible for creating, implementing and managing the disaster recovery plan. This plan should define each team member’s role and responsibilities. In the event of a disaster, the recovery team should know how to communicate with each other, employees, vendors, and customers. 2. Risk evaluation: Assess potential hazards that put your organization at risk. Depending on the type of event, strategize what measures and resources will be needed to resume business. For example, in the event of a cyber attack, what data protection measures will the recovery team have in place to respond? 3. Business-critical asset identification: A good disaster recovery plan includes documentation of which systems, applications, data, and other resources are most critical for business continuity, as well as the necessary steps to recover data. 4. Backups: Determine what needs backup (or to be relocated), who should perform backups, and how backups will be implemented. Include a recovery point objective (RPO) that states the frequency of backups and a recovery time objective (RTO) that defines the maximum amount of downtime allowable after a disaster. These metrics create limits to guide the choice of IT strategy, processes and procedures that make up an organization’s disaster recovery plan. The amount of downtime an organization can handle and how frequently the organization backs up its data will inform the disaster recovery strategy. 5. Testing and optimization: The recovery team should continually test and update its strategy to address ever-evolving threats and business needs. By continually ensuring that a company is ready to face the worst-case scenarios in disaster situations, it can successfully navigate such challenges. 
In planning how to respond to a cyber attack, for example, it’s important that organizations continually test and optimize their security and data protection strategies and have protective measures in place to detect potential security breaches.
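The RPO and RTO metrics in step 4 above can be checked mechanically. A minimal sketch (the 4-hour RPO and 1-hour RTO thresholds are hypothetical, chosen only for illustration):

```python
from datetime import datetime, timedelta

# Hypothetical objectives: back up at least every 4 hours (RPO),
# restore service within 1 hour of an outage (RTO).
RPO = timedelta(hours=4)
RTO = timedelta(hours=1)

def rpo_met(last_backup: datetime, disaster: datetime) -> bool:
    """Data-loss window = time elapsed since the last good backup."""
    return disaster - last_backup <= RPO

def rto_met(disaster: datetime, restored: datetime) -> bool:
    """Downtime = time from the outage until service is restored."""
    return restored - disaster <= RTO

disaster = datetime(2024, 1, 1, 12, 0)
print(rpo_met(datetime(2024, 1, 1, 9, 0), disaster))   # 3 h of data at risk -> True
print(rto_met(disaster, datetime(2024, 1, 1, 14, 0)))  # 2 h of downtime -> False
```

Tightening either threshold narrows the choice of disaster recovery strategy: a one-minute RPO effectively mandates continuous replication rather than periodic backups.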
  • 47. What are the types of disaster recovery? Back-up: This is the simplest type of disaster recovery and entails storing data off site or on a removable drive. However, just backing up data provides only minimal business continuity help, as the IT infrastructure itself is not backed up. Cold Site: In this type of disaster recovery, an organization sets up a basic infrastructure in a second, rarely used facility that provides a place for employees to work after a natural disaster or fire. It can help with business continuity because business operations can continue, but it does not provide a way to protect or recover important data, so a cold site must be combined with other methods of disaster recovery. Hot Site: A hot site maintains up-to-date copies of data at all times. Hot sites are time-consuming to set up and more expensive than cold sites, but they dramatically reduce down time. Disaster Recovery as a Service(DRaaS): In the event of a disaster or ransomware attack, a DRaaS provider moves an organization’s computer processing to its own cloud infrastructure, allowing a business to continue operations seamlessly from the vendor’s location, even if an organization’s servers are down. DRaaS plans are available through either subscription or pay-per-use models. There are pros and cons to choosing a local DRaaS provider: latency will be lower after transferring to DRaaS servers that are closer to an organization’s location, but in the event of a widespread natural disaster, a DRaaS that is nearby may be affected by the same disaster. Back Up as a Service: Similar to backing up data at a remote location, with Back Up as a Service, a third party provider backs up an organization’s data, but not its IT infrastructure. Datacenter disaster recovery: The physical elements of a data center can protect data and contribute to faster disaster recovery in certain types of disasters. 
For instance, fire suppression tools will help data and computer equipment survive a fire. A backup power source will help businesses sail through power outages without grinding operations to a halt. Of course, none of these physical disaster recovery tools will help in the event of a cyber attack. Virtualization: Organizations can back up certain operations and data or even a working replica of an organization’s entire computing environment on off-site virtual machines that are unaffected by physical disasters. Using virtualization as part of a disaster recovery plan also allows businesses to automate some disaster recovery processes, bringing everything back online faster. For virtualization to be an effective disaster recovery tool, frequent transfer of data and workloads is essential, as is good communication within the IT team about how many virtual machines are operating within an organization. Point-in-time copies: Point-in-time copies, also known as point-in-time snapshots, make a copy of the entire database at a given time. Data can be restored from this back-up, but only if the copy is stored off site or on a virtual machine that is unaffected by the disaster. Instant recovery: Instant recovery is similar to point-in-time copies, except that instead of copying a database, instant recovery takes a snapshot of an entire virtual machine.
  • 48. ● How does cloud disaster recovery work? Cloud disaster recovery takes a very different approach than traditional DR. Instead of loading the servers with the OS and application software and patching to the last configuration used in production, cloud disaster recovery encapsulates the entire server, which includes the operating system, applications, patches, and data into a single software bundle or virtual server. The virtual server is then copied or backed up to an offsite data centre or spun up on a virtual host in minutes. Since the virtual server is not dependent on hardware, the operating system, applications, patches, and data can be migrated from one data center to another much faster than traditional DR approaches.
  • 56. Performance management and capacity planning Performance management means monitoring and allocating existing data processing resources to applications according to a Service Level Agreement (SLA) or informal service objectives. Capacity planning is the process of planning for sufficient computer capacity in a cost-effective manner to meet the future service needs of all users. ● Performance management: The goal of performance management is to make the best use of your current resources to meet your current objectives, without excessive tuning effort. To formalize your objectives, you can set up a Service Level Agreement (SLA). An SLA is a contract that objectively describes measurable performance factors, for example: ● Average transaction response time (for network, I/O, processor, or in total) ● Transaction volumes ● System availability
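The measurable SLA factors above lend themselves to automated checks. A small sketch (the 200 ms response-time target and 99.9% availability floor are hypothetical SLA values):

```python
def sla_report(response_times_ms, uptime_s, window_s,
               max_avg_ms=200.0, min_availability=0.999):
    """Compare measured response times and uptime against SLA targets."""
    avg = sum(response_times_ms) / len(response_times_ms)
    availability = uptime_s / window_s
    return {
        "avg_response_ms": avg,
        "availability": availability,
        "meets_sla": avg <= max_avg_ms and availability >= min_availability,
    }

# One day of measurements: 50 s of downtime out of 86,400 s.
report = sla_report([120, 180, 150, 210], uptime_s=86350, window_s=86400)
print(report["meets_sla"])  # avg 165 ms and 99.94% availability -> True
```

The same report, tracked over time, feeds capacity planning: growth in transaction volumes that pushes the average response time toward the SLA limit signals when current resources will stop meeting service levels.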
  • 57. ● Capacity planning: Capacity planning involves asking the following questions: How much of your computer resources (processor, storage, I/O, network) are being used? Which workloads are consuming the resources (workload distribution)? What are the expected growth rates? When will the demands on current resources affect service levels?
  • 58. Load Balancing What is load balancing ? In computing, load balancing refers to the process of distributing a set of tasks over a set of resources (computing units), with the aim of making their overall processing more efficient. Load balancing can optimize the response time and avoid unevenly overloading some compute nodes while other compute nodes are left idle. https://en.wikipedia.org/wiki/Load_balancing_(computing)
  • 61. Static and dynamic algorithms ● Static: A load balancing algorithm is "static" when it does not take into account the state of the system for the distribution of tasks. ● Dynamic: Unlike static load distribution algorithms, dynamic algorithms take into account the current load of each of the computing units (also called nodes) in the system.
  • 62. Round Robin method of load balancing (algorithm) Disadvantage: Different requests from the same IP may be allocated to different servers, even if the requests are related (no session affinity).
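The round-robin rotation and the session-affinity fix for its disadvantage can be sketched in a few lines (server names and the request shape are illustrative):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Static round robin: rotates through servers, ignoring their current load."""
    def __init__(self, servers):
        self._servers = cycle(servers)

    def route(self, request):
        return next(self._servers)

class StickyRoundRobinBalancer(RoundRobinBalancer):
    """Round robin with session affinity: requests from the same client IP
    always go to the server chosen for that IP the first time."""
    def __init__(self, servers):
        super().__init__(servers)
        self._affinity = {}

    def route(self, request):
        ip = request["ip"]
        if ip not in self._affinity:
            self._affinity[ip] = next(self._servers)
        return self._affinity[ip]

rr = RoundRobinBalancer(["s1", "s2", "s3"])
print([rr.route({"ip": "10.0.0.1"}) for _ in range(4)])  # ['s1', 's2', 's3', 's1']
```

Because the plain balancer consults no system state, it is a static algorithm in the terminology of the previous slide; a dynamic variant would weight the choice by each node's current load.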
  • 67. Containers Containers are technologies that allow you to package and isolate applications with their entire runtime environment— all of the files necessary to run. This makes it easy to move the contained application between environments (dev, test, production, etc.) while retaining full functionality.
  • 68. What can you do with containers? You can deploy containers for a number of workloads and use cases, large and small. Containers give your team the underlying technology needed for a cloud-native development style.
  • 69. Example: Docker Container A Docker container is a popular lightweight, standalone, executable container that includes everything needed to run an application, including libraries, system tools, code, and runtime. Docker is also a software platform that allows developers to build, test, and deploy containerized applications quickly.
  • 71. How does Docker work? The Docker technology uses the Linux kernel and features of the kernel, like Cgroups and namespaces, to segregate processes so they can run independently. This independence is the intention of containers—the ability to run multiple processes and apps separately from one another to make better use of your infrastructure
  • 73. Amazon Cloud compute services ● Amazon EC2 (https://www.youtube.com/watch?v=TsRBftzZsQo )
  • 74. Comparison of AWS, GCP, Azure ● https://www.youtube.com/watch?v=n24OBVGHufQ
  • 75. Amazon EC2 Quick Start adapted from http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EC2_GetStarted.html
  • 76. Amazon Elastic Compute Cloud (EC2) ● Amazon Machine Images (AMIs) are the basic building blocks of Amazon EC2 ● An AMI is a template that contains a software configuration (operating system, application server and applications) that can run on Amazon’s computing environment ● AMIs can be used to launch an instance, which is a copy of the AMI running as a virtual server in the cloud.
  • 77. Getting Started with Amazon EC2 ● Step 1: Sign up for Amazon EC2 ● Step 2: Create a key pair ● Step 3: Launch an Amazon EC2 instance ● Step 4: Connect to the instance ● Step 5: Customize the instance ● Step 6: Terminate instance and delete the volume created
  • 78. Creating a key pair ● AWS uses public-key cryptography to encrypt and decrypt login information. ● AWS only stores the public key, and the user stores the private key. ● There are two options for creating a key pair: ○ Have Amazon EC2 generate it for you ○ Generate it yourself using a third-party tool such as OpenSSH, then import the public key to Amazon EC2
  • 79. Generating a key pair with Amazon EC2 1. Open the Amazon EC2 console at http://console.aws.amazon.com/ec2/ 2. On the navigation bar select region for the key pair 3. Click Key Pairs in the navigation pane to display the list of key pairs associated with the account
  • 80. Generating a key pair with EC2 (cont.) 4. Click Create Key Pair 5. Enter a name for the key pair in the Key Pair Name field of the dialog box and click Create 6. The private key file, with .pem extension, will automatically be downloaded by the browser.
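The console steps above can also be done programmatically. A hedged sketch using the AWS SDK for Python (boto3); the key name, region, and output path are illustrative, and AWS credentials must already be configured:

```python
def create_key_pair(key_name, region="us-east-1", out_path=None):
    """Ask EC2 to generate a key pair and save the private key locally.
    AWS stores only the public key; the .pem material is returned exactly once."""
    import boto3  # deferred so the sketch can be read without the SDK installed
    import os

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.create_key_pair(KeyName=key_name)
    path = out_path or f"{key_name}.pem"
    with open(path, "w") as f:
        f.write(resp["KeyMaterial"])
    os.chmod(path, 0o400)  # private key must not be publicly viewable
    return path
```

This mirrors the first option on the previous slide (letting Amazon EC2 generate the pair); importing a locally generated OpenSSH public key would use `import_key_pair` instead.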
  • 81. Launching an Amazon EC2 instance 1. Sign in to AWS Management Console and open the Amazon EC2 console at http://console.aws.amazon.com/ec2/ 2. From the navigation bar select the region for the instance
  • 82. Launching an Amazon EC2 instance (cont.) 3. From the Amazon EC2 console dashboard, click Launch Instance
  • 83. Launching an Amazon EC2 instance (cont.) 4. On the Create a New Instance page, click Quick Launch Wizard 5. In Name Your Instance, enter a name for the instance 6. In Choose a Key Pair, choose an existing key pair, or create a new one 7. In Choose a Launch Configuration, a list of basic machine configurations are displayed, from which an instance can be launched 8. Click continue to view and customize the settings for the instance
  • 84. Launching an Amazon EC2 instance (cont.) 9. Select a security group for the instance. A Security Group defines the firewall rules specifying the incoming network traffic delivered to the instance. Security groups can be defined on the Amazon EC2 console, in Security Groups under Network and Security
  • 85. Launching an Amazon EC2 instance (cont.) 10. Review settings and click Launch to launch the instance 11. Close the confirmation page to return to EC2 console 12. Click Instances in the navigation pane to view the status of the instance. The status is pending while the instance is launching After the instance is launched, its status changes to running
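The launch wizard steps above reduce to a single API call in the AWS SDK for Python (boto3). A hedged sketch; the AMI ID, key name, security group, and region are placeholders you would substitute:

```python
def launch_instance(ami_id, key_name, security_group_id,
                    instance_type="t2.micro", region="us-east-1"):
    """Launch one EC2 instance from an AMI and return its instance ID.
    The instance starts in the 'pending' state and moves to 'running'."""
    import boto3  # deferred import; requires configured AWS credentials

    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.run_instances(
        ImageId=ami_id,                      # the AMI template (slide 76)
        InstanceType=instance_type,
        KeyName=key_name,                    # key pair from the earlier steps
        SecurityGroupIds=[security_group_id],# firewall rules (step 9)
        MinCount=1,
        MaxCount=1,
    )
    return resp["Instances"][0]["InstanceId"]
```

The returned instance ID is what the console's Instances pane shows while the status transitions from pending to running.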
  • 86. Connecting to an Amazon EC2 instance ● There are several ways to connect to an EC2 instance once it’s launched. ● Remote Desktop Connection is the standard way to connect to Windows instances. ● An SSH client (standalone or web-based) is used to connect to Linux instances.
  • 87. Connecting to Linux/UNIX Instances from Linux/UNIX with SSH Prerequisites: - Most Linux/UNIX computers include an SSH client by default, if not it can be downloaded from openssh.org - Enable SSH traffic on the instance (using security groups) - Get the path the private key used when launching the instance 1. In a command line shell, change directory to the path of the private key file 2. Use the chmod command to make sure the private key file isn’t publicly viewable
  • 88. Connecting to Linux/UNIX Instances(cont.) 3. Right click on the instance to connect to on the AWS console, and click Connect. 4. Click Connect using a standalone SSH client. 5. Enter the example command provided in the Amazon EC2 console at the command line shell
  • 89. Transferring files to Linux/UNIX instances from Linux/UNIX with SCP Prerequisites: - Enable SSH traffic on the instance - Install an SCP client (usually included by default) - Get the ID of the Amazon EC2 instance, the public DNS of the instance, and the path to the private key If the key file is My_Keypair.pem, the file to transfer is samplefile.txt, and the instance’s DNS name is ec2-184-72-204-112.compute-1.amazonaws.com, the command below copies the file to the ec2-user home directory
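With the values given above, the referenced scp command would be along these lines (/home/ec2-user is the default home directory on Amazon Linux; the command is echo'd so the sketch runs without an actual instance):

```shell
KEY_FILE="My_Keypair.pem"
SRC_FILE="samplefile.txt"
HOST="ec2-184-72-204-112.compute-1.amazonaws.com"

# Copy samplefile.txt into the ec2-user home directory on the instance.
echo scp -i "$KEY_FILE" "$SRC_FILE" ec2-user@"$HOST":/home/ec2-user/
```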
  • 90. Terminating Instances - If the instance launched is not in the free usage tier, as soon as the instance starts to boot, the user is billed for each hour the instance keeps running. - A terminated instance cannot be restarted. - To terminate an instance: 1. Open the Amazon EC2 console 2. In the navigation pane, click Instances 3. Right-click the instance, then click Terminate 4. Click Yes, Terminate when prompted for confirmation
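The same termination can be done from the AWS CLI; a sketch (the instance ID below is a placeholder, and the command is echo'd so the sketch runs without credentials):

```shell
INSTANCE_ID="i-0123456789abcdef0"   # placeholder instance ID

# CLI equivalent of Instances -> right-click -> Terminate.
echo aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
```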
  • 92. Google App Engine Quick Start adapted from https://developers.google.com/appengine/docs
  • 93. Google App Engine (GAE) ● GAE lets users run web applications on Google’s infrastructure ● GAE data storage options are: ○ Datastore: a NoSQL schemaless object datastore ○ Google Cloud SQL: Relational SQL database service ○ Google Cloud Storage: Storage service for objects and files ● All applications on GAE can use up to 1 GB of storage and enough CPU and bandwidth to support an efficient application serving around 5 million page views a month for free. ● Three runtime environments are supported: Java, Python and Go.
  • 94. Developing Java Applications on GAE ● The easiest way to develop Java applications for GAE is to use the Eclipse development environment with the Google plugin for Eclipse. ● App Engine Java applications use the Java Servlet standard for interacting with the web server environment. ● An application’s files, including compiled classes, JARs, static files and configuration files, are arranged in a directory structure using the WAR standard layout for Java web applications.
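As a rough sketch, the WAR standard layout mentioned above looks like this (the application, servlet, and file names are hypothetical):

```
MyApp/
  src/                     # Java sources (e.g. MyServlet.java)
  war/
    index.html             # static files
    WEB-INF/
      web.xml              # servlet mappings (Java Servlet standard)
      appengine-web.xml    # App Engine configuration
      classes/             # compiled classes
      lib/                 # JARs, including App Engine SDK libraries
```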
  • 95. Running a Java Project ● The App Engine SDK includes a web server application to test applications. The server simulates the complete GAE environment. ● The project can be run using the “Debug As > Web Application” option of Eclipse or using Ant. ● After running the server, the application can be tested by visiting the server’s URL in a Web browser.
  • 96. Uploading an Application to GAE ● Applications are created and managed using the Administration Console at https://appengine.google.com. ● Once an application ID is registered for an application, the application can be uploaded to GAE using the Eclipse plugin or a command-line tool in the SDK. ● After uploading, the application can be accessed from a Web browser. If a free appspot.com account was used for registration, the URL for the application will be http://app_id.appspot.com/, where app_id is the application ID assigned during registration.
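With the standalone App Engine Java SDK, the command-line upload mentioned above used the appcfg tool; a sketch (the SDK path and WAR directory name are hypothetical, and the command is echo'd so the sketch does not contact GAE):

```shell
APP_DIR="myapp/war"   # hypothetical WAR directory of the application

# appcfg.sh update uploads the application whose ID is set in
# war/WEB-INF/appengine-web.xml.
echo ./appengine-java-sdk/bin/appcfg.sh update "$APP_DIR"
```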
  • 97. Comparison of AWS, GCP, Azure ● https://www.youtube.com/watch?v=n24OBVGHufQ
  • 98. Parameter | AWS | Azure | Google Cloud Platform
  App Testing | Device Farm | DevTest Labs | Cloud Test Lab
  API Management | Amazon API Gateway | Azure API Gateway | Cloud Endpoints
  Kubernetes Management | EKS | Kubernetes Service | Kubernetes Engine
  Git Repositories | AWS Source Repositories | Azure Source Repositories | Cloud Source Repositories
  Data Warehouse | Redshift | SQL Warehouse | BigQuery
  Object Storage | S3 | Block Blobs and Files | Google Cloud Storage
  Relational DB | RDS | Relational DBs | Google Cloud SQL
  Block Storage | EBS | Page Blobs | Persistent Disks
  Marketplace | AWS | Azure | G Suite
  File Storage | EFS | Azure Files | ZFS and Avere
  Media Services | Amazon Elastic Transcoder | Azure Media Services | Cloud Video Intelligence API
  Virtual Network | VPC | VNet | Subnet
  Pricing | Per hour | Per minute | Per minute
  Maximum processors in VM | 128 | 128 | 96
  Maximum memory in VM (GiB) | 3904 | 3800 | 1433
  Caching | ElastiCache | Redis Cache | Cloud CDN
  Load Balancing Configuration | Elastic Load Balancing | Load Balancer, Application Gateway | Cloud Load Balancing
  Global Content Delivery Networks | CloudFront | Content Delivery Network | Cloud CDN
  https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison
  • 99. Details | AWS | Azure | GCP
  Compute Services
  - AWS: 1) AWS Beanstalk 2) Amazon EC2 3) Amazon EC2 Auto-Scaling 4) Amazon Elastic Container Registry 5) Amazon Elastic Kubernetes Service 6) Amazon Lightsail 7) AWS Serverless Application Repository 8) VMware Cloud for AWS 9) AWS Batch 10) AWS Fargate 11) AWS Lambda 12) AWS Outposts 13) Elastic Load Balancing
  - Azure: 1) Platform-as-a-Service (PaaS) 2) Function-as-a-Service (FaaS) 3) Service Fabric 4) Azure Batch 5) Cloud Services 6) Container Instances 7) Azure Container Service (AKS) 8) Virtual Machines 9) Virtual Machine Scale Sets
  - GCP: 1) App Engine 2) Docker Container Registry 3) Instance Groups 4) Compute Engine 5) Graphics Processing Unit (GPU) 6) Knative 7) Kubernetes 8) Functions
  Storage Services
  - AWS: 1) Simple Storage Service (S3) 2) Elastic Block Storage (EBS) 3) Elastic File System (EFS) 4) Storage Gateway 5) Snowball 6) Snowball Edge 7) Snowmobile
  - Azure: 1) Blob Storage 2) Queue Storage 3) File Storage 4) Disk Storage 5) Data Lake Store
  - GCP: 1) Cloud Storage 2) Persistent Disk 3) Transfer Appliance 4) Transfer Service
  https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison
  • 100. AI/ML
  - AWS: 1) SageMaker 2) Comprehend 3) Lex 4) Polly 5) Rekognition 6) Machine Learning 7) Translate 8) Transcribe 9) DeepLens 10) Deep Learning AMIs 11) Apache MXNet on AWS 12) TensorFlow on AWS
  - Azure: 1) Machine Learning 2) Azure Bot Service 3) Cognitive Services
  - GCP: 1) Cloud Machine Learning Engine 2) Dialogflow Enterprise Edition 3) Cloud Natural Language 4) Cloud Speech API 5) Cloud Translation API 6) Cloud Video Intelligence 7) Cloud Job Discovery (Private Beta)
  Database Services
  - AWS: 1) Aurora 2) RDS 3) DynamoDB 4) ElastiCache 5) Redshift 6) Neptune 7) Database Migration Service
  - Azure: 1) SQL Database 2) Database for MySQL 3) Database for PostgreSQL 4) Data Warehouse 5) Server Stretch Database 6) Cosmos DB 7) Table Storage 8) Redis Cache 9) Data Factory
  - GCP: 1) Cloud SQL 2) Cloud Bigtable 3) Cloud Spanner 4) Cloud Datastore
  Backup Services
  - AWS: Glacier
  - Azure: 1) Archive Storage 2) Backup 3) Site Recovery
  - GCP: 1) Nearline (infrequently accessed data) 2) Coldline (rarely accessed data)
  Serverless Computing
  - AWS: 1) Lambda 2) Serverless Application Repository
  - Azure: Functions
  - GCP: Google Cloud Functions
  https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison
  • 101. Strengths
  - AWS: 1) Dominant market position 2) Extensive, mature offerings 3) Support for large organizations 4) Global reach 5) Flexibility and a wider range of services
  - Azure: 1) Second largest provider 2) Integration with Microsoft tools and software 3) Broad feature set 4) Hybrid cloud 5) Support for open source 6) Ideal for startups and developers
  - GCP: 1) Designed for cloud-native businesses 2) Commitment to open source and portability 3) Flexible contracts 4) DevOps expertise 5) Complete container-based model 6) Most cost-efficient
  Caching | ElastiCache | Redis Cache | Cloud CDN
  File Storage | EFS | Azure Files | ZFS and Avere
  Networking | Amazon Virtual Private Cloud (VPC) | Azure Virtual Network (VNET) | Cloud Virtual Network
  Security | AWS Security Hub | Azure Security Center | Cloud Security Command Center
  Location | 77 availability zones within 24 geographic regions | Presence in 60+ regions across the world | Presence in 24 regions and 73 zones; available in 200+ countries and territories
  Documentation | Best in class | High quality | High quality
  DNS Services | Amazon Route 53 | Azure Traffic Manager | Cloud DNS
  Notifications | Amazon Simple Notification Service (SNS) | Azure Notification Hub | None
  Load Balancing | Elastic Load Balancing | Load Balancing for Azure | Cloud Load Balancing
  Automation | AWS OpsWorks | Azure Automation | Compute Engine Management
  Compliance | AWS CloudHSM | Azure Trust Center | Google Cloud Platform Security
  Pricing/Discount Options | One-year free trial along with a discount of up to 75% for a 1-3 year commitment | Up to 75% discount for a commitment ranging from one to three years | GCP credit of $300 for 12 months, plus a sustained-use discount of up to 30%
  https://cloud.google.com/free/docs/aws-azure-gcp-service-comparison