AWS (Amazon Web Services) Cloud Practitioner
This is a prerequisite for any of the other courses.
AWS has data centers distributed worldwide and offers on-demand delivery of IT resources, both shared and dedicated. This approach allows users to share resources, while accounts remain isolated at the hypervisor level (a hypervisor is the component that serves as the main pillar of virtualization in a cloud computing system). The pricing model for this infrastructure is pay as you go.
The infrastructure is organized into a range of different product types:
- computing power
- data storage
- databases as a service (NoSQL, graph, and relational databases)
All of this infrastructure can be used on a pay-as-you-go model.
AWS infrastructure is divided into
geographic regions. Those
geographic regions are divided into
availability zones.
e.g., the N. Virginia region is the largest
one and supports all of the available AWS services. If we use a smaller
region we may encounter problems if it does not support some services.
There is also an AWS GovCloud region and it
is for US Government organizations.
There is also a Secret Region (for US
Government Intelligence Organizations).
When we choose a region we need to take into consideration the
following:
- latency (proximity of the server to the customer)
- costs
Each region has at least 2
availability zones. The zones are
physically isolated from each other, which provides
business continuity for our app. If one
availability zone goes down, the
infrastructure in the other
availability zone will continue to operate.
The largest region, namely
N. Virginia, has 6
availability zones. The availability zones
are connected to each other through a
high-speed fiber-optic network.
Local Zones are located close to large
cities, industries and IT centers and can provide the lowest latency
within that specific area. For instance, if our business is located in
Los Angeles, we will want to have the infrastructure located within that
local zone.
Local zones operate as an extension of an
AWS region and also have multiple
availability zone capability for high availability. There are over 100
edge locations that are used for the
CloudFront
content delivery network.
CloudFront
can cache content and distribute it across those edge locations around the
globe for high-speed, low-latency delivery to the end users. It will also provide
protection against DDoS attacks.
AWS Wavelength deploys standard AWS compute and storage services to the edge of telecommunication carriers' 5G networks. This enables developers to build applications that deliver ultra-low latencies to mobile devices, because those applications run at the service provider's data center without having to traverse multiple hops across the internet to reach their final destination. This is great for connected vehicles that need low latency for environmental analysis or critical data, for interactive live video streams and live television broadcasts, and also for real-time gaming on mobile devices.
AWS Ground Station
is a fully managed service that lets you control satellite
communications, process the data that is received, and scale your
operations. The way this works is the following:
contacts, which are simply reservations for
a ground station antenna, are scheduled for when a specific satellite is
in proximity to that specific antenna. By doing this you can save up to
eighty percent on the cost of ground station operations by paying
only for the actual antenna time that is used and relying on the
global footprint of AWS Ground Station to
download data when needed within proximity of that satellite.
Project Kuiper, an Amazon subsidiary, will be launching a proposed constellation of low-earth-orbit satellites delivering high-speed internet via high-performance customer terminal antennas. This will provide high-speed internet to developing countries (a similar initiative is Starlink).
Cloud Computing
Cloud computing gives developers and IT departments the ability to focus on what matters most and avoid costs like server purchasing, maintenance, and ongoing capacity upgrades.
There are several different models and deployment strategies that have emerged to help meet the specific needs of these different users. Each type of cloud service and deployment method provides you with different levels of control, flexibility, and management.
Cloud Computing Models:
Infrastructure as a Service - IaaS
- contains the basic building blocks for cloud IT
- this is nuts-and-bolts stuff: if we want to launch a Linux server and manage that server ourselves, that's what we would do with Infrastructure as a Service, using the Elastic Compute Cloud - EC2
Platform as a Service - PaaS
- AWS takes a little bit more control over the infrastructure: AWS manages the underlying infrastructure, the hardware and the OS (e.g. a relational DB - AWS provides the OS, the server and everything, but we have to do the high-level administration of the database)
Software as a Service - SaaS
- this is a complete product that normally runs inside a browser and refers to end-user applications (e.g. Office 365 or Salesforce)
Serverless Computing
- allows us to build and run applications and services without thinking about the servers
- also referred to as Function-as-a-Service (FaaS) or Abstracted Services (e.g. AWS Simple Storage Service - S3, AWS Lambda, DynamoDB or Amazon SNS); see the sketch below
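As a small illustration of how little infrastructure you manage with these abstracted services, here is a minimal sketch using the Python SDK (boto3) to publish a message to an Amazon SNS topic. The topic ARN is a placeholder, not a real resource.

    # Minimal sketch: publishing a message to an SNS topic with boto3.
    # Assumes boto3 is installed, credentials are configured, and the
    # topic ARN below is replaced with one from your own account.
    import boto3

    sns = boto3.client("sns", region_name="us-east-1")

    response = sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:my-demo-topic",  # placeholder ARN
        Subject="Hello from serverless land",
        Message="No servers were provisioned to send this message.",
    )

    print(response["MessageId"])  # SNS returns the ID of the published message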
Cloud Computing Deployment Models
There are also different models for deploying cloud computing
Complete Cloud Deployment
- the application is fully deployed in the cloud and all parts of it run in the cloud
Hybrid Cloud Deployment
- resources on premise are interconnected with cloud based resources
*this will allow an existing on premises infrastructure to extend and
grow into the cloud
On Premises Deployment
- deploying resources completely on-premises using virtualization and
resource management tools such as VMware
- it is also known as private cloud
The AWS Management Console is a web-based interface to AWS.
AWS resources can also be accessed through various SDKs (for JavaScript, Java, Python, etc.). There are also APIs for AWS, and a CLI tool for connecting to AWS.
AWS Pages
https://aws.amazon.com/certification/
This is the certifications page with information about all the existing
certifications
https://aws.amazon.com/whitepapers
This is the whitepapers page with whitepapers and technical
discussions/content authored by AWS
https://aws.amazon.com/products
Page describing all AWS Products
https://aws.amazon.com/new
Page describing all new AWS Products
AWS Storage Services
Simple Storage Service (S3)
- simple storage service designed to store any type of data
- it's a serverless service; we create an S3 bucket and store the data in it
- we upload objects into the bucket (and there's no limit to how much we can store)
Amazon Glacier
- cheapest AWS storage service, used for long-term data archiving
- data is not as readily available as it is in S3, and we can set up rules to migrate old data from S3 to AWS Glacier for long-term archiving (a sketch of this follows below)
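To make the S3-to-Glacier workflow concrete, here is a hedged boto3 sketch: it creates a bucket, uploads an object, and adds a lifecycle rule that transitions objects to Glacier after 90 days. The bucket name and file path are placeholders.

    # Sketch: create a bucket, upload an object, and add a lifecycle rule
    # that moves objects to Glacier after 90 days. Names are placeholders.
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    bucket = "my-example-archive-bucket"  # must be globally unique

    s3.create_bucket(Bucket=bucket)  # outside us-east-1 you'd also pass CreateBucketConfiguration

    # Upload a local file as an object in the bucket.
    s3.upload_file("report.pdf", bucket, "reports/report.pdf")

    # Lifecycle rule: transition all objects to Glacier after 90 days.
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-old-data",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to every object
                    "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )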
Amazon Elastic Block Store (EBS)
- a highly available, low-latency block device service used for attaching to servers launched with the Amazon EC2 (Elastic Compute Cloud) service, similar to how we attach a drive to a computer at home
Amazon Elastic File System (EFS)
- network-attached storage (meaning multiple servers can access it, similar to how a NAS works on a home network)
AWS Storage Gateway
- provides hybrid storage between on-premises environments and the AWS cloud
AWS Snowball
- a portable data storage device that can be used to migrate large amounts of data from on-premises environments over to the AWS cloud (data is loaded onto the Snowball device, which is then sent to AWS by courier)
Use Case Examples
1 - Storage Example
Here we've got the AWS cloud. We can create a VPC (Virtual Private Cloud) inside that AWS cloud, and that VPC is our own private space within the AWS cloud - an impenetrable fortress against attack; no one will be able to enter our private space without us allowing that to happen. Now let's just say we launched two servers in our VPC. Now we want these servers to
2 - Hybrid Storage Example
We've got on-site storage in a corporate data center, and we also want to have that data stored in the AWS cloud on Amazon S3. Why would we do that? It's great as a disaster recovery solution, because we can still have high-speed access to our data in our corporate data center, and at the same time we're taking advantage of the durability and availability of Amazon S3 in the event that our on-site server goes down.
The first problem we're going to encounter is that this corporate data center may have petabytes of data, and transferring that via the internet to the AWS cloud is not practical - it's just too much, and it's going to take too long. To solve this problem, AWS can send us a Snowball device, a high-capacity device that can store petabytes of data. We can upload our data to that Snowball device, send it back to AWS, and they will upload it for us into the Amazon S3 bucket.
Then we need a solution for making sure that the data in our corporate data center stays synced with the S3 bucket. That's where the AWS Storage Gateway comes in: it will orchestrate all of that syncing for us. If you've got a high-speed link between your corporate data center and the AWS cloud - which is what you can have with the AWS Direct Connect service - the Storage Gateway can orchestrate and manage all of that syncing for you. It will take your popular, frequently accessed content and store copies of it on-site in your on-site storage, while at the same time storing all of that data in the Amazon S3 bucket. That way you get the durability and availability of Amazon S3 as a disaster recovery solution, and at the same time you've got high-speed access to your data, which is also stored in the corporate data center.
Database Services in AWS
Amazon RDS (Relational Database Service) is a fully-managed database service that makes it easy to launch database servers in the AWS cloud and scale them when required. The RDS service can launch servers for MySQL, including variations of the MySQL database engine such as MariaDB and Amazon's own enterprise version of MySQL called Amazon Aurora. Standard PostgreSQL is available, as well as Amazon's enterprise Aurora PostgreSQL. Microsoft SQL Server and Oracle are also available.
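A hedged sketch of launching a small managed MySQL instance with RDS via boto3; the identifier, credentials and sizing are illustrative placeholders.

    # Sketch: launching a small managed MySQL instance with RDS via boto3.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="demo-mysql-db",
        Engine="mysql",
        DBInstanceClass="db.t3.micro",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",  # use a secrets store in real projects
        AllocatedStorage=20,  # size in GiB
    )

    # RDS provisions the server, OS and patching; we only administer the database itself.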
DynamoDB is AWS's NoSQL database as a service. It's a serverless service like Amazon S3, so you don't need to worry about the underlying infrastructure behind it. AWS takes care of everything for you, and it provides high-speed, extremely low-latency performance.
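A minimal sketch of writing and reading an item with boto3, assuming a hypothetical table named "Users" with a partition key "user_id" already exists.

    # Sketch: writing and reading an item in a DynamoDB table with boto3.
    import boto3

    dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
    table = dynamodb.Table("Users")

    # Write an item (no schema beyond the key is required).
    table.put_item(Item={"user_id": "42", "name": "Alice", "plan": "free"})

    # Read it back by its key.
    item = table.get_item(Key={"user_id": "42"}).get("Item")
    print(item)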
Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse that is based upon the PostgreSQL database engine. If you're looking for a big data storage solution, Redshift is perfect for this.
Amazon ElastiCache is an in-memory data store or cache in the cloud. It allows you to retrieve information from fast, fully managed in-memory caches instead of relying on slower disk-based databases.
The AWS Database Migration Service orchestrates the migration of databases over to AWS easily and securely. It can also migrate data from one database engine type to a totally different one. For example, you can use it to migrate from Oracle over to Amazon Aurora.
Amazon Neptune is a fast, reliable, fully-managed graph database service. It has a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency.
DB Use Case Example
We've got our corporate data center, and inside it we've got an on-site Oracle database. Let's just say that Oracle database is old, it's worn out, it's outgrown its capacity, and it needs to be replaced. You've done a total cost of ownership analysis on the situation, and you've identified that it is far more cost-effective to host that database on the AWS cloud. So the first thing you're going to want to do is to launch an RDS database instance.
Now, let's say you want to further reduce your costs by not having to pay the Oracle licensing fee. What you can do is launch an Amazon Aurora database instance, which will be running either the MySQL or the PostgreSQL open-source database engine, and by doing that you're not going to be paying a licensing fee. The disadvantage is that some of the fields of data in that Oracle database may not be compatible with the Amazon Aurora MySQL or PostgreSQL fields, so when you take the data out of your Oracle database, you're going to have to change and manipulate it to suit the Amazon Aurora database. That is where the AWS Database Migration Service comes in. You can define a database migration workflow by specifying the source database, the target database, and any operations on the data that need to occur during the migration. Once you've done that, you can run the database migration job, and it will look after everything for you. It will migrate the data from the on-site Oracle database to your Amazon Aurora database, and at the same time it will give you feedback through a dashboard in the AWS Management Console on how that job is going, because the job could take hours, days, or weeks depending on how big your Oracle database is and how fast your connection to the AWS cloud is. It will also give you feedback on any errors and alert you to any problems that may occur.
Once our RDS instance is up and running and the data has been migrated over, we can look at launching a web server that can receive traffic and requests from the outside world over the internet, get the required data from the RDS database, and return it to the requester. Now, let's just say we're getting a lot of requests from the outside world, and a lot of them are for the same data. What we can do is take all of our regularly accessed content and put it into an ElastiCache node, and because the ElastiCache node is serving those requests from memory - not from a solid-state drive as is the case with the RDS service - it will return the data very quickly, and at a lower cost. We need to take into consideration that the cost of storing data in memory is higher than storing it on a solid-state drive, so we need to make sure that the ElastiCache node only contains regularly accessed data. The way we do that is that requests come in from the outside world to the web server. The web server checks whether the data is in the ElastiCache node. If it is, it simply grabs that data and forwards it back to the requester.
Let's say a request comes in and that data is not in the ElastiCache node. The web server will then go to the RDS database instance, get the data if it's there, and after it has got that data it will write it into the ElastiCache node. At the same time, it will define a time to live, or TTL, for that specific data, and once that TTL has expired, if there have been no further requests for that data within the TTL, the data will be removed automatically from the ElastiCache node by the ElastiCache service. By doing that, all of the data in the ElastiCache node will be regularly accessed data that has been accessed within that time-to-live period.
Now, let's say a request comes into the web server for either writing to or deleting from the RDS database. The request comes into the web server, and the web server either deletes from or writes to the RDS database. If it is deleting from the database, it also deletes from the ElastiCache node. If it's writing to the database, it also writes to the ElastiCache node and again defines a time-to-live period, so that if the data is not requested within that period, it will automatically be removed from the ElastiCache node by the ElastiCache service.
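The cache-aside / time-to-live pattern described above can be sketched in a few lines of Python. This is only an illustration: the Redis endpoint hostname is a placeholder for an ElastiCache node, and query_database() is a hypothetical stand-in for the real RDS lookup.

    # Sketch of the cache-aside pattern with a TTL, as described above.
    import redis

    cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)
    TTL_SECONDS = 300  # data not requested for 5 minutes falls out of the cache


    def query_database(key):
        # Placeholder for a real query against the RDS instance.
        return f"value-for-{key}"


    def get_value(key):
        cached = cache.get(key)
        if cached is not None:                # cache hit: served from memory
            return cached.decode()

        value = query_database(key)           # cache miss: go to the RDS database
        cache.setex(key, TTL_SECONDS, value)  # write back with a time to live
        return value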
AWS Compute Services
Amazon Elastic Compute Cloud, or EC2 for short, provides virtual servers in the AWS cloud. You can launch one or thousands of instances simultaneously and only pay for what you use. There's a broad range of instance types with varying compute and memory capabilities, optimized for different use cases. Amazon EC2 Auto Scaling allows you to dynamically scale your Amazon EC2 capacity up or down automatically according to conditions that you define. It can scale up or down by launching or terminating instances based on demand. It can also perform health checks on those instances and replace them when they become unhealthy.
Amazon Lightsail (https://aws.amazon.com/lightsail/) is the easiest way to launch virtual servers running applications in the AWS cloud. AWS will provision everything you need, including DNS management and storage, to get you up and running as quickly as possible.
Amazon Elastic Container Service, or ECS for short, is a highly scalable, high-performance container management service for Docker containers. The containers run on a managed cluster of EC2 instances.
AWS Lambda is a serverless service that lets you run code in the AWS cloud without having to worry about provisioning or managing servers. You just upload your code, and AWS takes care of everything for you.
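The "just upload your code" part looks roughly like this in practice: a single Python handler function that Lambda invokes for each event. The event shape used here is only an assumed example.

    # Sketch of a minimal AWS Lambda handler in Python.
    # Lambda calls this function for every invocation; the example event
    # payload mentioned in the comment is just an assumption.
    import json


    def lambda_handler(event, context):
        # e.g. event = {"name": "world"} when invoked with that JSON payload
        name = event.get("name", "stranger")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }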
Web Server Use Case (hosting with EC2)
Here we have the AWS cloud and our
Virtual Private Cloud or VPC located inside
that, and remember, a VPC is our own private space within the
AWS cloud, and no one can enter that unless
we allow them to enter it.
We can launch an EC2 instance, and that can
be running our web application, for example, WordPress, so what happens
if this single EC2 instance becomes
overwhelmed by demand? For example, we might have released a new
product, and our WordPress application cannot deliver the web pages
quickly enough to satisfy that. What we could do is that we could tear
down that instance and put in a bigger instance that could handle that
demand, and that is called
vertical scaling, which used to be the common
approach 10 or 20 years ago. But the problem is that it takes time to do that,
and while we're doing that, our application is not running. And also,
what happens when the demand goes back down again? Do we have to tear
that down and then put in a smaller instance, and what happens if that
happens every day? What happens if that happens every hour? It's just
not going to be economical for us to do that. What we can do, is that we
can horizontally scale, and we do that by
adding more instances, and as demand goes up, we add more instances, and
as demand goes down, we terminate those instances, and that way, we
still have continuity of our application. Our Application will always be
running because there's always going to be at least one
EC2 instance to look after the demand. One
problem with this architecture is that it has
multiple endpoints for our web server, and
that's not practical because customers are not going to go to one
endpoint until that stops working and then go to another one and then
another one. It's just not going to work like that, and obviously, their
bookmarks in their browser are not going to be valid, so we need a way
of having one single endpoint for that web application that our customer
can go to and then having a way of distributing those requests to a
EC2 instance that is available. That is
where
Elastic Load Balancing
comes in, so it can receive traffic from our end users, and it will
distribute that traffic to an EC2 instance
that is available, so a request will come in, it will distribute it to
an available EC2 instance. Another request
will come in, and it will distribute it to a different
EC2 instance that is available, and it will
balance the load across those
EC2 instances, and if one of those
EC2 instances becomes unhealthy, it will
fail a health check with the
Elastic Load Balancer, and then the
Elastic Load Balancer will no longer send
traffic to that unhealthy EC2 instance. But
what happens if that demand is only for a short period of time, for
example, half an hour? What do we do then? It's not going to be
practical for us to terminate instances when demand goes down and then
launch instances manually when that occurs. We can't do that every hour.
It's not going to be practical, and that's where the
Auto-Scaling service
comes in. It can launch EC2 instances
automatically when the demand on those instances increases, and it can
automatically terminate EC2 instances when
the demand on those instances goes down. It can also perform health
checks on those instances, and if one of those instances becomes
unhealthy for whatever reason, it can replace that instance with a
healthy instance, and it will do that automatically without you having
to do anything at all.
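As a rough idea of how these pieces are wired together, here is a hedged boto3 sketch that creates an Auto Scaling group spread across two subnets (two availability zones), attached to a load balancer target group and using ELB health checks. All names, IDs and ARNs are placeholders.

    # Sketch: an Auto Scaling group behind a load balancer, using boto3.
    # Launch template name, subnet IDs and target group ARN are placeholders.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="wordpress-asg",
        LaunchTemplate={"LaunchTemplateName": "wordpress-template", "Version": "$Latest"},
        MinSize=2,                      # never fewer than two instances
        MaxSize=6,                      # cap on horizontal scaling
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # two AZs for high availability
        TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc"],
        HealthCheckType="ELB",          # replace instances that fail load balancer health checks
        HealthCheckGracePeriod=300,
    )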
Networking and Content Delivery
Amazon CloudFront
is a global content delivery network or
CDN for short, that securely delivers your
frequently requested content to over 100 edge locations across the
globe, and by doing this, it achieves low latency and high transfer
speeds for your end-users. It also provides protection against
DDoS
attacks.
Virtual Private Cloud
or VPC for short, lets you provision a
logically isolated section of the AWS cloud, and you can launch AWS
resources in that virtual network that you yourself define, and this is
your own personal private space within the AWS cloud, and no one can
enter it unless you allow them to enter it.
AWS Direct Connect
is a high speed dedicated network connection to AWS. Enterprises can use
it to establish a private connection to the AWS cloud in situations
where a standard internet connection won't be adequate.
AWS Elastic Load Balancing , or ELB for
short, automatically distributes incoming traffic for your application
across multiple EC2 instances and also in multiple availability zones,
so if one availability zone goes down, the traffic will still go to the
other availability zone, and your application will continue to deliver
responses to requests. It also allows you to achieve high availability
and fault tolerance by distributing traffic evenly amongst those
instances, and it can also bypass unhealthy instances.
Amazon Route 53
is a highly available and scalable domain name system, or
DNS for short, and it can handle
traffic for your domain name and direct that traffic to your back-end
web server.
Amazon API gateway
is a fully managed service that makes it easy for developers to create
and deploy secure application programming interfaces or APIs at any
scale. It handles all of the tasks involved in accepting and processing
up to hundreds of thousands of concurrent API calls. It's a serverless
service, and as such, you don't need to worry about the underlying
infrastructure. AWS looks after everything for you.
Use Case: EC2 instances with CDN and a custom domain
So let's have a look at an example of how we can use these networking
services of AWS. So here we've got the architecture that we looked at
before in the compute section, but one thing we didn't mention was
availability zones. So let's just say that
we've launched that architecture in a single availability zone. What
happens if that availability zone goes down? What happens to our
traffic? Our traffic has nowhere to go, and our application stops
delivering responses to requests. That is why it's always desirable to
have our architecture distributed across multiple
availability zones. That way, if one
availability zone goes down, the other one will continue to operate, and
the infrastructure within that other availability zone will continue to
respond to requests.
We can launch EC2 instances in multiple
availability zones, and our
Elastic Load Balancing service can
distribute that traffic across multiple
availability zones as well. So if one
availability zone goes down, the
Elastic Load Balancer will continue to
distribute traffic to the availability zone that is still healthy and to
those instances in that availability zone that are still healthy as
well. So let's just say our application running on these
EC2 instances
is a WordPress web server, and that contains lots of images and lots of
video that is static content. It's not really changing that much, and
it's not efficient for us to continue to keep delivering that from our
EC2 instances. We would like somewhere to
put that where it can be delivered with high speed and low latency and
to take the load off our EC2 instances.
That is where the CloudFront content
delivery network or CDN comes in, so we can get these large images and
large videos that are not really changing that often, and we can put
that in a CloudFront distribution, and
CloudFront will cache that and distribute
that across hundreds of edge locations across the globe. So when your
end-user requests that video or those images, it will be delivered to
them with really high speed and low latency, and at the same time, it's
going to take the load off your
EC2 instances and is going to significantly
reduce your costs. At the same time, dynamic content that is changing
regularly, CloudFront can forward those
requests over to the Elastic Load Balancer,
which will then forward them to an
EC2 instance. So that way, you have the
best of both worlds: you have dynamic content delivered as dynamic
content, and at the same time, you have these large videos and images
that aren't really changing that often delivered very rapidly. Now that
CloudFront service or that
CloudFront distribution will have its own
DNS name that we can put into a browser,
and we can directly access that. The problem with that is that
DNS name
for that CloudFront distribution will be
something very complicated and just won't mean anything to our end user
at all, so we would prefer to have our end-user type in a domain name
and have the request for that domain name forwarded to that
CloudFront service. As you can see here,
we've got example.com, and that is where Route 53 domain name service
can come in, so Route 53 will grab those requests for your domain,
example.com, and it will forward those requests over to the
CloudFront service, and the
CloudFront service will handle it from then
on.
2nd Example: Corporate Architecture
So let's just say we work for a large enterprise that has its own
corporate data center, and the reason it's
got its own corporate data center is because that is located where the
employees work, and we don't want our employees to be slowed down by a
network. We want them to be able to work efficiently, but at the same
time, we have resources on the
AWS cloud that those employees also need to
access, so we need some way of having a high-speed connection between
our corporate data center and the
AWS cloud, and that is where the
AWS Direct Connect service comes in and
that can provide a very high-speed fiber-optic network between our
corporate data center and the
AWS cloud, and that is completely private.
Okay, so that's a very complicated architecture, and don't be too
concerned if that's very overwhelming because if you're going on to
become a cloud practitioner, you're not going to need to really be able
to produce this yourself. At the associate-level certifications, that is a
different story - you'd be expected to create this yourself. But for cloud
practitioner, you'll need to know what these services do. You'll need to
know that Route 53 will forward requests for
your domain name to a back-end endpoint, and that
CloudFront will distribute your content to
hundreds of edge locations across the globe.
AWS Management Services
CloudFormation
allows you to use a text file to define your infrastructure and to use
this text file to deploy resources on the
AWS cloud. This allows you to define your
infrastructure as code, and you can manage your infrastructure with the
same version control tools that you use to manage your code.
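To illustrate "infrastructure as code", here is a hedged sketch that defines a tiny template (a single S3 bucket) inline and asks CloudFormation to create a stack from it; the stack and bucket names are placeholders.

    # Sketch: deploying a one-resource CloudFormation stack from code.
    # The template simply declares an S3 bucket; names are placeholders.
    import json
    import boto3

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "NotesBucket": {"Type": "AWS::S3::Bucket"}
        },
    }

    cloudformation = boto3.client("cloudformation", region_name="us-east-1")
    cloudformation.create_stack(
        StackName="demo-infra-as-code",
        TemplateBody=json.dumps(template),
    )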
The
AWS Service Catalog
allows enterprises to catalog resources that can be deployed on the
AWS cloud. This allows an enterprise to
achieve common governance and
compliance for its IT resources by clearly
defining what is allowed to be deployed on the
AWS cloud.
Amazon CloudWatch
is a monitoring service for
AWS cloud resources and applications that
are deployed on the AWS cloud. It can be
used for triggering scaling operations, or it can also be used for
providing insight into your deployed resources.
AWS Systems Manager
provides a unified user interface that allows you to view operational
data from multiple AWS services and to automate tasks across those AWS
resources. That helps to shorten the time to detect and resolve any
operational problems.
AWS CloudTrail
monitors and logs AWS account activity, including
actions taken through the AWS management
console, the AWS software development kits, the command line tools, and
other AWS services. This greatly simplifies security analysis of
the activity of users in your account.
AWS OpsWorks
provides managed instances of Chef and
Puppet.
Chef and
Puppet can be used to configure and
automate the deployment of AWS resources.
AWS Trusted Advisor
is an online expert system that can analyze your AWS account and the
resources inside it, and then it can advise you on how to best achieve
high security and best performance from those resources.
Intro to Analytics and Machine Learning
Amazon Elastic MapReduce, or EMR for short, is
AWS's Hadoop
framework as a service. You can also run other frameworks in
Amazon EMR
that integrate with Hadoop, such as
Apache Spark (analytics engine),
Apache Hive (data warehouse),
Apache HBase (NoSQL DB),
Presto (distributed SQL query engine)
and Apache Flink (stream processing engine).
Data can be analyzed by
Amazon EMR
in several data stores, including
Amazon S3 and Amazon
DynamoDB .
Amazon Athena
allows you to analyze data stored in an
Amazon S3 bucket using your standard
SQL
statements.
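A small sketch of what querying S3 with Athena looks like from the SDK; the database, table and output location are assumed placeholders, and query results land in the given S3 path.

    # Sketch: running a SQL query over data in S3 with Athena via boto3.
    # Database, table and output location are placeholder names.
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    response = athena.start_query_execution(
        QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
        QueryExecutionContext={"Database": "analytics_db"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/queries/"},
    )

    print(response["QueryExecutionId"])  # use this ID to poll for and fetch results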
Amazon FinSpace
is a petabyte-scale data management and analytics service purpose-built
for the financial services industry.
FinSpace
also includes a library of over 100 financial analysis functions.
Amazon Kinesis
allows you to collect, process, and analyze real-time streaming
data.
Amazon QuickSight
is a business intelligence reporting tool, similar to
Tableau, or if you're a Java programmer, similar to
BIRT, and it is fully managed by AWS.
Amazon CloudSearch
is a fully managed search engine service that supports up to 34
languages. It allows you to create search solutions for your website or
application.
Amazon OpenSearch
is a fully managed service for the
OpenSearch
framework (a fork of Elastic.co's Elasticsearch). This allows high-speed search and analysis of data that is
stored on AWS.
It was formerly called
Amazon Elasticsearch Service.
Machine Learning Services
AWS DeepLens
is a deep learning-enabled video camera. It has a deep learning software
development kit that allows you to create advanced vision system
applications.
Amazon SageMaker
is AWS's flagship machine learning product. It allows you to build and
train your own machine learning models and then deploy them to the AWS
cloud and use them as a back end for your applications.
Amazon Rekognition
provides deep learning-based analysis of video and images.
Amazon Lex
allows you to build conversational chatbots. These can be used in many
applications, such as first-line support for customers.
Amazon Polly
provides natural-sounding text to speech.
Amazon Comprehend
can use deep learning to analyze text for insights and relationships.
This can be used for customer analysis or for advanced searching of
documents.
Amazon Translate can use machine learning to accurately translate text to a number of
different languages.
Amazon Transcribe
is an automatic speech recognition service that can analyze audio files
that are stored in Amazon S3 and then return the transcribed text.
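Most of these machine learning services are a single API call away. A hedged sketch using Amazon Translate and Amazon Comprehend via boto3 (the input text is arbitrary):

    # Sketch: calling two of the managed ML services with boto3.
    import boto3

    translate = boto3.client("translate", region_name="us-east-1")
    comprehend = boto3.client("comprehend", region_name="us-east-1")

    # Translate French text to English.
    result = translate.translate_text(
        Text="Le nuage est formidable.",
        SourceLanguageCode="fr",
        TargetLanguageCode="en",
    )
    print(result["TranslatedText"])

    # Analyze the sentiment of some English text.
    sentiment = comprehend.detect_sentiment(Text="I love this course!", LanguageCode="en")
    print(sentiment["Sentiment"])  # e.g. POSITIVE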
Intro to Security, Identity and Compliance
Security, identity and compliance services are a very important category of AWS services, and there is a very broad selection of them.
AWS Artifact
is an online portal that provides access to
AWS security and compliance documentation,
and that documentation can be readily available when needed for auditing
and compliance purposes.
AWS Certificate Manager
issues SSL certificates for HTTPS communication with your website. It
integrates with AWS services such as Route 53 and CloudFront, and the
certificates that are provisioned through
AWS Certificate Manager
are completely free.
Amazon Cloud Directory
is a cloud-based directory service that can have hierarchies of data in
multiple dimensions, unlike conventional
LDAP-based directory services
(LDAP - Lightweight Directory Access Protocol), which can only have a single
hierarchy.
AWS Directory Service
is a fully managed
Microsoft active directory service in the
AWS cloud.
AWS CloudHSM
is a dedicated hardware security module in the AWS cloud. This allows
you to achieve corporate and regulatory compliance while at the same
time greatly reducing your costs over using your own
HSM (Hardware Security Module) in your own
infrastructure.
Amazon Cognito
provides sign-in and sign-up capability for your web and mobile
applications. You can also integrate that sign-up process with external
OAuth providers such as
Google and
Facebook, and also
SAML 2.0 (Security Assertion Markup Language)
providers as well.
AWS Identity and Access Management , or
IAM
for short, allows you to manage user access to your AWS services and
resources in your account. Users and groups of users have individual
permissions that allow or deny access to your resources.
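A hedged sketch of managing a user's permissions with boto3: create a user and attach one of AWS's managed policies (read-only access to S3). The user name is a placeholder.

    # Sketch: creating an IAM user and granting read-only access to S3.
    # The user name is a placeholder; the policy ARN is an AWS managed policy.
    import boto3

    iam = boto3.client("iam")

    iam.create_user(UserName="report-reader")
    iam.attach_user_policy(
        UserName="report-reader",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )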
AWS Organizations
provides policy-based management for multiple AWS accounts. This is
great for large organizations that have multiple accounts, and they want
to manage those and the users that use those accounts centrally.
Amazon Inspector
is an automated security assessment service. It can help to identify
vulnerabilities or areas of improvement within your AWS account.
AWS Key Management Service , or
KMS
for short, makes it easy to create and control encryption keys for your
encrypted data, and it also uses hardware security modules to secure
your keys. It's integrated well with AWS services such as
Amazon S3 ,
Redshift, and EBS.
AWS Shield provides protection against distributed denial of service, or DDoS,
attacks. The standard version of
AWS Shield is implemented automatically on all AWS accounts.
Web Application Firewall , or
WAF
for short, is a web application firewall that sits in front of your
website to provide additional protection against common attacks such as
SQL injection and cross-site scripting. It has different sets of rules
that can be used for different applications.
Intro to Developer, Media, Mobile, Migration, Business, IoT
AWS Cloud9
is an integrated development environment running in the AWS cloud. It
allows you to deploy servers directly to AWS from an integrated
development environment. We'll be using
Cloud9
extensively if you go on to the developer associate pathway with
Backspace Academy.
AWS CodeStar
makes it easy to develop and deploy applications to AWS. It can manage
the entire CI/CD pipeline for you. It has a
project management dashboard, including an integrated issue tracking
capability powered by
Atlassian Jira software.
AWS X-Ray
makes it easy to analyze and debug applications. This provides you with
a better insight into the performance of your application and the
underlying services that it relies upon.
AWS CodeCommit
is a Git repository service just like GitHub, and
it's running in the AWS cloud.
AWS CodePipeline
is a continuous integration and continuous delivery service, or
CI/CD for short. It can build, test, and
then deploy your code every time a code change occurs.
AWS CodeBuild
compiles your source code, runs tests, and then produces software packages
that are ready to deploy on AWS.
AWS CodeDeploy
is a service that automates software deployments to a variety of compute
services, including Amazon EC2,
AWS Lambda, and even instances that are
running on-premises. We'll be using
CodePipeline ,
CodeBuild , and
CodeDeploy
quite a bit. If you're going on to do the developer associate pathway
with Backspace Academy, we'll be creating a fully integrated
CI/CD pipeline that will automatically
package node npm packages and run tests using
Mocha
before deploying to an AWS environment.
AWS recently acquired a media production services company called
Elemental, and as a result, there are some
very high-quality broadcast media services available on AWS.
Elemental MediaConvert
is a file-based video transcoding service for converting video formats
for video-on-demand content.
MediaPackage
prepares video content for delivery over the internet. It can also
protect against piracy through the use of digital rights management.
MediaTailor
inserts individually targeted advertising into video streams. Viewers
receive streaming video with ads that are personalized for them.
AWS Elemental MediaLive
is a broadcast-grade live video processing service for creating video
streams for delivery to televisions and internet-connected devices.
Elemental MediaStore
is a storage service in the AWS cloud that is optimized for media. And
finally,
Amazon Kinesis
Video Streams streams video from connected devices through to the AWS
cloud for analytics, machine learning, and other processing applications.
Mobile Services
AWS Mobile Hub
allows you to easily configure your AWS services for mobile applications
in one place. It generates a cloud configuration file which stores
information about those configured services.
AWS Device Farm
is an app testing service for Android, iOS and web applications. It
allows you to test your app against a large collection of physical
devices in the AWS cloud. And finally,
AWS AppSync
is a GraphQL backend for mobile and web applications. If you're a
developer and you don't know what GraphQL is, then make sure you go out
and find out because it is absolutely revolutionizing the way we think
about data.
Migration services
AWS Application Discovery Service
gathers information about an enterprise's on-premises data centers to
help plan migration over to AWS. Data that is collected is retained in
an encrypted format in an
AWS Application Discovery Service
datastore.
AWS Database Migration Service
orchestrates the migration of databases over to the AWS cloud. You can
also migrate databases with one database engine type to another totally
different database engine type. For example, you can migrate from Oracle
over to AWS Aurora.
AWS Server Migration Service
can automate the migration of thousands of on-premise workloads over to
the AWS cloud. This reduces costs and minimizes the downtime for
migrations. AWS Snowball is a portable
petabyte-scale data storage device that can be used to migrate data from
on-premises environments over to the AWS cloud. You can copy your
data onto the Snowball device and then send it to AWS, who will then
upload it to a storage service for you.
Business & Productivity services
Amazon WorkDocs
is a secure, fully managed file collaboration and management service in
the AWS cloud. The web client allows you to view and provide feedback
for over 35 different file types, including Microsoft Office file types
and PDF.
Amazon WorkMail
is a secure managed business email and calendar service.
Amazon Chime
is an online meeting service in the AWS cloud. It is great for
businesses for online meetings, video conferencing, calls, chat, and to
share content both inside and outside of your organization.
Amazon WorkSpaces
is a fully managed secure desktop as a service. It can easily provision
streaming cloud-based Microsoft Windows desktops.
Amazon AppStream
is a fully managed secure application streaming service that allows you
to stream desktop applications from AWS to an HTML5 compatible web
browser. This is great for users who want access to their applications
from anywhere.
IoT Services
AWS IoT
is a managed cloud platform that lets embedded devices such as
microcontrollers and the Raspberry Pi securely interact with cloud
applications and other devices.
Amazon FreeRTOS
is an operating system for microcontrollers such as the Microchip PIC32
that allows small, low-cost, low-power devices to connect to AWS
Internet of Things.
AWS Greengrass
is software that lets you run local AWS Lambda functions, messaging,
data caching, sync, and machine learning applications on
AWS IoT
connected devices. AWS Greengrass extends AWS services to devices so they
can act locally on the data they generate while still using
cloud-based
capabilities.
AWS Gaming services
Amazon GameLift
allows you to deploy, scale and manage your dedicated game servers in
the AWS cloud.
Amazon Lumberyard
(now deprecated) is a game development environment and
cross-platform AAA game engine on the AWS cloud.
Highly Available and Fault Tolerant Architecture
Elastic Beanstalk is one of AWS's deployment services, and
it allows you to deploy your applications to complex architectures on
AWS, and it does this without you having to worry about the underlying
architecture that is behind that.
Elastic Beanstalk
looks after everything for you, and you just need to worry about writing
your code.
We'll also talk about how
Elastic Beanstalk
can create highly available and fault-tolerant architectures and what
that actually means, and then finally, we'll look at the different
deployment options that are available on
Elastic Beanstalk.
Elastic Beanstalk
has been around for quite some time; it was first launched in 2011. It
allows you to quickly deploy and manage applications on environments,
and those environments are launched for you without you having to worry
about how it all works. It'll automatically handle capacity
provisioning. It'll launch a load balancer for you, if you need that.
It'll handle auto-scaling for you, and it can also implement health
monitoring, so that if one of the instances that are launched becomes
unhealthy, it can replace it automatically for you. If you need to
change your code after you've deployed it, it's quite easy to upload new
versions of that code, and that can be done through the console or the
command-line interface, and complete environments can also be
redeployed if need be. The application that you're deploying could be a
Docker container, or it could be raw code: Node.js, Java, .NET,
PHP, Ruby, Python, or Go.
You just supply your code and
Elastic Beanstalk
will deploy that for you, and it will provision that Node.js or whatever
environment for you automatically, or it could be a server such as
Apache, Nginx, Passenger, or
IIS (Internet Information Services).
The
Elastic Beanstalk
process starts with us going through an application creation process,
where we will first off upload a version of our software or our code or
whatever it is, and then Elastic Beanstalk will launch an environment
and that will consist of EC2 instances, or it could be a single EC2
instance. It could be a multi-AZ environment, but we define that for
Elastic Beanstalk, and it will do that automatically for us. From there, we will have
our environment launched, and our code will be running on that
environment. Now, if we find that we need to deploy a new version of
that code, we can deploy that to that existing environment, or we can
create a whole new environment, it doesn't really matter, so if we
deployed it to our existing environment, then when that environment has
gone through that update process, and the new version is deployed and
running, then the environment will feedback to the application to notify
that that new version of your application is actually running. One of
the big advantages of
Elastic Beanstalk
is that it can create a highly available and fault-tolerant architecture.
So what does that mean?
So here we've got the AWS cloud, and as we know, it's divided up into
regions, and those regions are divided up into availability zones, so if
we have our architecture distributed across multiple availability zones,
if one of those availability zones goes down, our infrastructure will
still continue to operate and serve requests.
Now, our virtual private cloud
will span the entire region, so it
will span multiple availability zones, and
so what we can do is that we can launch instances into both of those
availability zones, and that's going to give us high availability if one
of those availability zones goes down.
Now, in order for our architecture to respond to spikes in demand, or to
increases in demand because an availability zone goes down, we can launch
our instances using an auto-scaling group,
so if demand on one of those instances or a group of instances increases, the
auto-scaling group will add instances to
accommodate that, and vice versa: if the demand goes down, it will reduce
our number of EC2 instances. That allows for
elasticity in our design.
And finally, to receive requests from the outside world and to
distribute those requests to those multiple instances, we're going to
need an Elastic Load Balancer to do that,
and that will also have the advantage of conducting health checks on our
instances, so that if communication breaks down between the
Elastic Load Balancer and our EC2 instance
then our auto-scaling group will
automatically add additional instances and that creates fault tolerance
in our architecture. Just the same as you've got a number of options
available for architecture that you're deploying to, such as a single
EC2 instance or a highly available and fault-tolerant architecture
across multiple availability zones, you've also got a number of
different deployment options that you can use, so for example,
if you've got 20 EC2
instances, an all-at-once deployment
will deploy to all 20 EC2 instances at once. The downside of that is
going to be that while that is occurring, your architecture won't be
able to respond to requests, so that's obviously not a good thing.
So another option there is to do a
rolling deployment, and that will deploy
your application to a single batch at a time, so what that means is that
if you've got 20 EC2 instances, it can
deploy that to say two at a time, so you're
not going to be down by much, you're just going to be down from 20
instances down to 18 instances, but your architecture will still be
responding to requests. You can also do a
rolling with an additional batch, so what
that will do is if you've got again
20 EC2 instances, it will
temporarily increase to 22 while you're
doing those two deployments across those two EC2 instances, and that
way, you're still going to have your capacity at 20, which is what you
have designed for.
The other option there is an
immutable deployment, and that is a bit of
a variation of the all at once, so it's still doing an all at once
deployment across your 20 EC2 instances,
but while that's going on, it's going to deploy another
20 EC2 instances, so
temporarily you're going to have 40 EC2 instances, so it's going to double up on your capacity, but through that
period where your environment is being deployed or your new version or
whatever is being deployed to that environment, you're not going to be
suffering any downtime.
And finally, we've got
blue-green deployments, which use
two environments that will be running your application under the one
Elastic Beanstalk
application, so you will have a
blue environment and a
green environment. One of those could be a
development environment, and the other one
could be your production environment. So
when you get to the stage where your development environment is ready to
go to be deployed, to deploy that, all you simply need to do is to
switch over from one environment to the other environment, and then your
old environment will then become your new development environment, and
so that is very straightforward with
Elastic Beanstalk,
because it will simply allow you to swap the domain
names for those two environments automatically, and that
makes sure that your changeover doesn't involve any downtime for
responding to requests.
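The deployment options above are selected through Elastic Beanstalk option settings. Here is a hedged sketch that switches an environment to "rolling with additional batch" via boto3; the environment name is a placeholder.

    # Sketch: choosing an Elastic Beanstalk deployment policy from code.
    # The environment name is a placeholder.
    import boto3

    eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

    eb.update_environment(
        EnvironmentName="my-wordpress-env",
        OptionSettings=[
            {
                "Namespace": "aws:elasticbeanstalk:command",
                "OptionName": "DeploymentPolicy",
                "Value": "RollingWithAdditionalBatch",
            }
        ],
    )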
AWS Command Line Interface
We can connect to our AWS services and resources using a
command-line interface, so instead of
having to use the AWS management console as
we've done before, we can use text commands to achieve a lot of what we
would normally do with that graphical user interface.
We'll start off by looking at the back end service that makes this
happen, and that is the AWS application programming interface or API for
short. Then we'll look at the number of different command-line interface
applications that we can install on our computer that will allow remote
access to those services and resources. We'll also look at the
AWS Cloud9
service, and we'll discuss why I primarily use this for anything to do
with the command-line interface and the security concerns around not
using
AWS Cloud9 , and finally, we'll finish up by having a lab on using the
Cloud9
service with the command line interface.
When you're using the
AWS management console
like we've done in the past, AWS uses an application programming
interface to enable that communication between your remote computer and
the AWS services and resources, so how that works is that the AWS
management console that you've been using is simply an application that
is running on your browser, and it is sending HTTP calls backwards and
forwards to this application programming interface back end on AWS.
Now the documentation is available for the AWS API for many services,
for example, for the
S3 API and the EC2 query API, but not for everything. So if you wanted to
create your own application and there wasn't a software development kit
for the language you're using - I can't imagine what language that would
be, because there's certainly a very broad range of software development
kits available - it is still possible for you to send HTTP calls
to the API, provided you have done the authentication beforehand.
So it provides that back-end mechanism for that
communication, and it's utilized again by the
AWS management console, and we'll also use
it with the AWS command-line interface, so
that again is an application that's running on your remote computer that
will be sending HTTP calls to this API back end. Also there are a number
of software development kits that wrap the API up into libraries that
can be used with, for example, JavaScript, PHP, Python, and the
like, and so you don't have to actually know how to do these HTTP calls.
You just need to know how to use that software development kit, and the
documentation for that is, of course, brilliant, and many other AWS
services also use the API for communication within the AWS cloud.
API calls to AWS can only be made by authenticated users with valid
security credentials. For example, if you're using the management
console, you would have been authenticated through your username and
password. If you're using the command-line interface application on a
remote computer, then you would need to download an access key id and
secret and use that for authentication with AWS. If you're using an
application on your browser that has been developed using one of the
many AWS software development kits, then normally, you would be issued
with
IAM temporary credentials. So what that means is that this application that you have may use
login for Google, may use login for Facebook, or whatever, and it might
use your Google account or your Facebook account to authenticate you,
and then that will be issuing temporary credentials for you to access
the AWS resources through that browser-based application, and finally,
we can actually log all of these API calls using the AWS CloudTrail
service, so that's great. If we have any security issues or any
performance issues, we can go back through those
CloudTrail logs and make sure that there's
nothing untoward going on there.
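Tying this together: the SDK wraps those signed HTTP API calls for you. A hedged sketch of an authenticated session listing S3 buckets; in real projects you would normally rely on an IAM role or environment configuration rather than pasting keys into code.

    # Sketch: an authenticated SDK session making an API call (list S3 buckets).
    # Prefer IAM roles or environment/shared-credentials configuration over
    # hard-coding an access key id and secret; placeholders are shown only
    # for illustration.
    import boto3

    session = boto3.Session(
        aws_access_key_id="AKIA...EXAMPLE",          # placeholder access key id
        aws_secret_access_key="EXAMPLE-SECRET-KEY",  # placeholder secret
        region_name="us-east-1",
    )

    s3 = session.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])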
A picture tells a thousand words, so how does this all work? Down the
bottom there, we've got our AWS cloud that
we want to connect to using a remote computer, and so that remote
computer will be sending HTTP API calls to the
AWS cloud to get information from the
AWS cloud, and to issue instructions to the
AWS services. So the first way we can do it there is we could have an
IAM user and that user will have a username
and password, and they can use that username and password to log in to
the AWS management console that is running inside of their browser, and
the AWS management console running on that remote computer will then
issue those API calls to the AWS cloud. The
second option there is that we could have an
IAM user download
IAM credentials in the form of an access
key and a secret to go with that access key, and so if that is presented
to the
AWS command-line interface application that
is running on that remote computer that will authenticate that
IAM user, and that
IAM user will then be able to issue
command-line interface commands to the AWS cloud.
And finally, if we've got an external user, so this user doesn't have an
AWS account, for example, you might have an application like Dropbox for
example, and you have millions of users, and it's not practical, or it's
actually not even possible to create a million
IAM users, so you need to be able to
somehow authenticate those users and to allow those users to temporarily
access the AWS cloud, so you would use an application that is running
using the software development kits and
that application could authenticate you using an
OAuth authentication service, for example.
It could use the AWS Cognito service. It could use Google log in with
Google or log in with Facebook to authenticate you, and from that
authentication, you will have limited and temporary access through that
remote computer to the AWS cloud. Now to start using the command-line
interface, the first thing that you need to do is that you need to have
an application running on your computer that can allow that to happen,
so the standard AWS CLI application is available for download for
Windows, Mac, and Linux, and it allows those API commands to be sent to
AWS using the Windows command line or a Linux or Mac terminal
application. There is also the AWS Shell application, which is a
cross-platform standalone integrated shell environment written
in Python that provides even more features and automation
on top of the CLI application. And finally, we've also got the AWS
Tools for Windows PowerShell, so you can run CLI commands within Windows
PowerShell and at the same time use all of those automation tools that
are available within PowerShell. Now, if you want to have a look at all
of those CLI tools that are available, just go to the AWS website:
aws.amazon.com/cli
AWS CloudShell
is a shell environment that is accessed through the AWS management
console. It has the AWS command-line interface application
pre-installed. This provides significantly increased security compared
to running the command-line interface application
on your remote computer, because when you do that, you need to download
and use IAM credentials. When you use
AWS CloudShell
you are simply logging in to the management console. Now, you could argue
that is just as insecure because you could lose your username and
password, but coming up further in the course, we're going to learn
about multi-factor authentication that we
can apply to our account so that it cannot be accessed with
a username and password alone.
AWS Cloud9 IDE
is an integrated development environment running on an EC2 instance, and
you access that through the AWS management console. It also has the AWS
CLI application pre-installed, and it also provides that increased
security because, again, the IAM credentials are not saved on a remote
computer. One advantage that it has over the simple
CloudShell
is that it also has a tree view of the file structure of that EC2
instance, so if you want to upload files and manipulate those files and
maybe put them into an S3 bucket or something like that and you want to
do that all with the command line interface, then you can quite simply
just do a drag and drop from your Windows Explorer or the Mac equivalent
of that. Just drag and drop over to the tree, and those files will be
automatically transferred over using SFTP.
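For example, once a file has been dragged into that tree, pushing it to a bucket from the Python SDK is a one-liner (the bucket and file names here are made up); the AWS CLI equivalent would be an s3 cp command:

    import boto3

    # Hypothetical file and bucket names, purely for illustration.
    boto3.client("s3").upload_file("report.pdf", "my-example-bucket", "uploads/report.pdf")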
AWS Business Case (when to use their services)
6 Advantages of Cloud Computing
AWS defines six advantages of Cloud
Computing.
The first one there is that we're going to be
trading a capital expense for a variable expense, so in the past, we would have had to put forward a capital
expenditure request to management, to purchase these servers, to have
them installed, to have them maintained, all of that sort of thing and
then by the time we've gone through that process, we may have to go back
and redo that all again because we've run out of capacity already. And
this way, we're going to be swapping that for a variable expense that is
going to be able to react according to our business needs.
Next, we're going
to benefit from the massive economies of scale
of using this enormous AWS cloud, and those costs that are associated
with that AWS cloud are shared amongst millions of users, and so we're
not going to be getting a big variation in these costs. It's going to be
quite stable over the long term.
Next, we can stop guessing capacity. We're
going to have an elastic infrastructure that can vary according to our
needs. We don't need to guess our capacity to purchase fixed assets. We
are going to be using a service that is going to be able to accommodate
our needs into the future.
Next, we are going to be
increasing our speed and agility to get our
services and products to market quickly. We can launch an infrastructure
on AWS within minutes, and we can be up and running in a very short
amount of time.
Next, we're going to
stop spending money on running and maintaining our data centers
on-premises. This is a big one because there are a lot of overhead costs
that we may not take into consideration when we're implementing an
on-premises solution, and that could be anything from insurance costs,
to physical security costs, to electricity, and a whole heap of other things that go into maintaining and running that data
center.
And finally, we can
go global in minutes. The AWS cloud has
data centers across the globe, and we can launch within any part of the
globe within a very short amount of time.
4 Key Values for Building a Migration Business Case
AWS also defines four key values to use when you're building a business case for migration over to AWS.
AWS Pricing Calculator
The
AWS pricing calculator
allows us to estimate the monthly and annual costs of using individual AWS services. The first step after opening up the pricing calculator is that we need to select the service that we're going to be using, and there are a number of services available there. Pretty much the full range of services that you can get on AWS will be available here on the pricing calculator. After we've selected our service, and we can see here we've got the EC2 service up, we can start to define what we're going to be using on that EC2 service, so we can define what the instance type is, what the operating system is, and that sort of thing. We've also got the option there of an advanced estimate, and that's going to allow us to put in more information. We'll get our estimate, and we can see here that the first 12 months is going to cost us one and a half thousand US dollars. We're going to have a total upfront expenditure there and a monthly cost. We can save that, and we can share it with other people. We can export it as a CSV file to open in Microsoft Excel if we like, and so, if we know what services we're going to be using and how we're going to use them, it's a very quick and easy way to get a good estimate of what our ongoing costs are going to be. The
AWS Price List API
allows you to programmatically query the prices of any available AWS service, using either JSON with the Price List service API, which is also known as the query API, or with HTML using the Price List API, which is also known as the bulk API. It can also enable you to be notified when prices change, for example through Amazon SNS.
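A minimal sketch of the query API via the Python SDK, assuming boto3 is installed; the filter values below are just an example of the kind of lookup you might do:

    import json
    import boto3

    # The Price List query API is only served from a small number of regions,
    # us-east-1 among them.
    pricing = boto3.client("pricing", region_name="us-east-1")

    # Hypothetical filter: on-demand Linux t3.micro pricing in US East.
    response = pricing.get_products(
        ServiceCode="AmazonEC2",
        Filters=[
            {"Type": "TERM_MATCH", "Field": "instanceType", "Value": "t3.micro"},
            {"Type": "TERM_MATCH", "Field": "location", "Value": "US East (N. Virginia)"},
            {"Type": "TERM_MATCH", "Field": "operatingSystem", "Value": "Linux"},
        ],
        MaxResults=10,
    )

    # Each price list entry comes back as a JSON document (as a string).
    for price_item in response["PriceList"]:
        attributes = json.loads(price_item)["product"]["attributes"]
        print(attributes.get("instanceType"), attributes.get("location"))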
There are
four main cost centers involved within the
total cost of ownership model. The first one there is
server costs, then
storage costs,
network costs, and finally
IT labor costs.
Within storage, again, we're going to have that hardware cost, but we're also going to have storage administration costs. Are there backup costs involved in that? Do we need to have backup software included within that storage cost as well?
We also have network costs, so we're going to have our network, our internet, our local area network, we're going to have load balancing, but we're also going to have network administration costs involved as well. And finally our IT labor costs, for server administration and for virtualization as well, so not only are we going to have people that are experts in managing the physical hardware, but we may also need people that are experts in looking after VMware solutions as well. And within those first three, the server, storage, and network costs, running that physical infrastructure is going to incur an overhead cost as well. They're going to require space, somewhere to store those servers, they're going to have to have power, and they're also going to need to be cooled as well, so we need to make sure that those overhead costs are included as well.
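Purely to make that arithmetic concrete, here is a toy comparison in Python; every number is made up, and real figures would come from your own data center and from the pricing tools above:

    # Toy total-cost-of-ownership comparison; every figure here is hypothetical.
    on_prem = {
        "servers": 120_000,    # hardware plus server administration
        "storage": 40_000,     # disks, backup software, storage admin
        "network": 25_000,     # LAN, internet, load balancing, network admin
        "it_labor": 90_000,    # physical and virtualization expertise
        "overheads": 30_000,   # space, power, cooling, insurance, security
    }
    cloud_estimate = 180_000   # e.g. taken from the AWS Pricing Calculator

    on_prem_total = sum(on_prem.values())
    print(f"On-premises TCO: ${on_prem_total:,} per year")
    print(f"Cloud estimate:  ${cloud_estimate:,} per year")
    print(f"Difference:      ${on_prem_total - cloud_estimate:,} per year")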
The AWS Migration Evaluator,
which was formerly called TSO Logic, is a complimentary service to
create data-driven business cases for migration from on-premises data
centers over to the AWS cloud. The way it works is it will monitor your
existing on-premises systems, and it will collect that data, and it will
look at the cost involved in that, and it will automatically allow you
to build a very complex business case for migration over to AWS. The way
the
Migration Evaluator works is that a server
will be set up in your on-premises data center, and that will collect
information from your systems, be it VMware, Hyper-V, whatever it may
be. It will collect data in real-time from those servers that you've got
on-premises, and then it will store that data that has been collected in
a MongoDB database, and that data will then be used to create these data
packages, and those will then be uploaded to the AWS cloud into an
Amazon S3 bucket. Once that's done, you can
go into the Migration Evaluator, and you
can fetch a report that will compare all of the costs, the total cost of
ownership of your existing on-premises solution, based on the data that
has been collected, and any other information on costs that you have
been providing, and it will compare that to the cost involved of that
same solution on the AWS cloud. Another good tool that you can use in
the early stages of selling a migration solution to your senior
management is to use the
Cloud Adoption Readiness Tool or
CART for short, and this is a very quick
online survey where you can complete 16 questions, and then an
assessment report will be generated for you, and that will detail your
organization's readiness for a cloud migration across six perspectives,
and those are business, people, process, platform, operations, and
security.
AWS also defines some best practices to achieve a successful cloud
migration.
The first one there is to make sure that the stakeholders and your senior management or senior leaders are aligned. Now, this may require you to inform and educate your senior
management on what your stakeholder requirements are, and making sure
that everyone understands what all of the key stakeholders need to get
out of this migration, and this is very important when you're dealing
with senior leaders who may not be part of the IT department. Getting
them to understand all of the stakeholder needs and what you're trying
to do to achieve this solution that provides that for them will ensure
that decisions are made quickly without any conflict.
The next one
there is to
set top-down quantifiable goals, and these need to be clearly defined, with clear objectives that can be measured. There's no point having something that is vague and wishy-washy; you need something that is clear and direct and can be quantified in the end, and then you can expand from that and introduce more specific goals and more specific tasks to achieve that end.
Next, we need to trust the process. AWS has
been in this business for a very long time, and they are the biggest
player around. And their processes for migration are the best, and that
involves assessing where you currently are, and where you want to get
to, and creating a mobilization plan to achieve that, then implementing
that migration and taking advantage of any opportunities to innovate and
modernize your architecture.
Next, within your migration process,
you need to make sure that you
choose the right migration pattern, and
there are seven Rs to achieve that.
So the first one there is to
refactor, and that involves the most amount of effort: completely redesigning your entire architecture and all of the underlying infrastructure that is involved within that.
The next option there is to
re-platform, for example going from Windows Server to Linux, or going from an Oracle database over to an Aurora database.
The next option there is to
re-purchase, so you keep doing what you're currently doing but purchase a replacement product; if you've got a file server, for example, simply re-purchase an upgraded version of that file server. The next option there is to re-host, and that is a lift and shift, so you're going to take what you've got existing there now and simply move it to another location.
The next one there is to
relocate your virtual infrastructure, so
that could be on VMware or Hyper-V and allowing that to move from one
location to the other. It could be to AWS, or it could be to another
physical server, for example.
Next, we can simply retain what we've got
and do nothing, and finally, we can simply
retire the entire system. We don't need it
anymore, and we can get rid of it. When you're developing your migration
plan, it's a good idea to go to the
AWS Prescriptive Guidance website, and that's located at aws.amazon.com/prescriptive-guidance, and
what that is, it's a portal of a whole heap of PDF documents that
contain time-tested strategies, guides, and patterns from both AWS and
AWS partners that can help you accelerate cloud migration,
modernization, or optimization projects. So, for example, if you've got
a migration job where you want to migrate from Microsoft SQL Server to
the AWS cloud, what you can do is search for that on the prescriptive
guidance website. You will no doubt find a PDF document that will
detail the process that you need to go through, and also any of the
issues that you may encounter, and that's going to help you to produce a
much better migration plan.
Another great way of reducing those costs of IT labor and resources is
to use an expert system such as the
The
AWS Compliance Program
covers a very broad range of certifications, laws and regulations, and
frameworks that AWS is compliant with or can help you to become
compliant with. For example there, we've got
ISO 9001, we've got the Payment Card Industry Data Security Standard as well that AWS is compliant with,
we've got the HIPAA standard there that AWS
can help you to become compliant with as well. One thing that you need
to understand is that AWS may be compliant with a standard, or it may be
providing a compliance enabling service that can enable you to be
compliant with a standard. A good example of a standard that AWS itself is compliant with would be PCI DSS Level 1, and ISO 9001, again, AWS is fully compliant with that standard as well.
Another compliance enabling service that AWS provides is for the
HIPAA standard, and the reason that AWS
cannot provide you with a
HIPAA certification as such, is because the
HIPAA standard goes into much more than
just your back end services. For example, you may have a
HIPAA application, some software that you
have developed, and that is running on AWS, and the AWS side of things
is completely compliant, but your actual software may have issues, and
it may not be compliant, and so from that perspective, AWS has provided
everything that they can for you to enable compliance, but you still
need to do your end of it to get that
HIPAA
certification.
AWS Artifact
is a central resource for compliance-related information on AWS. It
provides on-demand access to AWS's security and compliance reports and
also selected online agreements. Some of the reports you can download
include SOC or PCI reports and those are accessed quite simply by going
to the AWS management console, and selecting the report that you want or
searching for that report, and then selecting it, and then downloading
that report. Okay, so when you go to the management console and
select
AWS Artifact, you can go to the reports section and search for a report, and so
here we have, we are searching for the PCI reports, and we can see
there, that we've got one: a PCI Attestation of Compliance, or AOC, report. We simply select that, and we download that report. When we
click on download report, it will actually not download the report itself, but rather the non-disclosure agreement for
AWS Artifact
with the reports attached to it. If you open it in Adobe Acrobat and click on the paper clip, you can see that attached to it are the reports that you want, and by clicking on those links you will be able to view those reports, download them, print them out, or do whatever you want with them. It is one thing to be compliant with a standard at a point
in time, and it's another thing to be able to maintain compliance with
that standard when your infrastructure is changing, and your software is
being updated, and so that's where
AWS Config
comes in, and what it is, it's a configuration management service
running on AWS that allows you to assess, audit, and evaluate the
configurations of your AWS resources. It achieves this by continuously
monitoring and recording any changes in your configuration on AWS based
on pre-built rules. Now, those rules are supplied by AWS, but you can
modify those to suit you as well. Those rules can be applied to both
network and software configurations, so if you do an update to software
and that change is something to do with compliance based on those rules,
then you will be alerted to that change. Multiple rules can be organized into a conformance pack to better organize those rules, and any changes that appear can be identified quickly by simply going to the AWS management console and looking at the cloud governance dashboard to see those changes. It has multi-account and multi-region data aggregation, so you can apply this across multiple accounts and across multiple regions. It is integrated with
AWS Organizations, so you can set up a conformance pack and apply it to all of the accounts within your organization.
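As a rough illustration with the Python SDK (the rule name below is made up; the source identifier is one of AWS's managed Config rules), enabling a rule and then listing the rules looks roughly like this:

    import boto3

    config = boto3.client("config")

    # Example: an AWS-managed rule that flags S3 buckets allowing public reads.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "s3-no-public-read",  # hypothetical name
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
            },
        }
    )

    # List the rules that are currently set up in this account and region.
    for rule in config.describe_config_rules()["ConfigRules"]:
        print(rule["ConfigRuleName"])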
There are a number of different support plans available from AWS to help you out when you get into trouble. They consist of Basic, Developer, Business, and, at the top of the line, Enterprise. They vary from free, which is purely and simply customer service only; there's no technical support, so if you've got a problem with your billing, for example, you can get free support on that, but if you want technical support, then you're going to have to pay for it. Developer is the base of the paid support plans, and that's going to give you up to a 12-hour response to critical failures, working up to Enterprise, which is 24/7 technical support from a senior engineer with a response time of less than 15 minutes for any critical failures. From my personal experience with AWS support, these response times that they quote are, for the most part, delivered on, though not always, and even with an Enterprise plan promising a less than 15-minute response, it may take them a lot longer than 15 minutes to actually sort out your problem. It is still a very good service to pay for, and if you're part of a large organization, you certainly should have at the very least a Business support plan. If you want to get more details about these support plans, go to the AWS website:
aws.amazon.com/premiumsupport/compare-plans
If you have a difficult
application that you're trying to deploy on AWS, you may want to
consider using the
AWS Professional Services, and they are a global team of AWS experts. They work as a
collaboration between the client and an AWS Partner Network or an APN
partner and the
AWS Professional Services
team, so if you've got this large application that you want to deploy,
then you would engage an APN partner, and then the APN partner would
work with the AWS Professional Services team to sort out all of those
issues for you. Professional Services provide a number of different offerings that use a unique methodology based on Amazon's internal best practices, and they help you to complete your projects faster and more
reliably. The Professional Services can provide experts in a specific
area of AWS, and they have global specialty practices that can support
your efforts in focused areas of enterprise cloud computing. For example, you might want to obtain the services of a machine learning expert, an Internet of Things expert, or a specific database expert, and
AWS Professional Services
can make that happen for you. AWS Managed Services consist of AWS cloud
experts, and they can provide help with migration and with operations assistance, such as incident monitoring, security, patch management, and that sort of thing. These AMS cloud experts work
alongside AWS partners and also with your own operations teams. It
leverages a growing library of automations, configurations, and runbooks for many use cases, and by doing that, it provides enhanced
security and cost optimization, because the AWS Managed Services are
going to make sure that you are operating with AWS best practices. If
ever there was a good reason for getting certified with AWS,
AWS IQ
for experts has to be it. It's a marketplace for people who are
AWS certified, so you need to have an active AWS certification. It has
to be at a minimum an associate-level certification, so it needs to be either an associate, professional, or specialty certification, and it enables you to
get paid for work that you complete for customers within their AWS
account. Now it's not available in all countries. It was first rolled
out only in the U.S., and so to use that service, you need to have U.S. tax details and a U.S. bank account, but it is currently being rolled out
to other countries as well. If you want to find out more about
AWS IQ
for experts, go to aws.amazon.com/iq/experts. How it works is that
you first create a profile, which will have all of your details, your
photo, your certifications, and qualifications. Once that profile is up,
that's going to enable you to connect with customers and communicate
directly with customers. If a customer is interested in anything that
you have to offer, then you can start a proposal and send that out to
the customer. If the customer accepts that proposal, you can then work
securely with that customer inside of their customer account with
limited IAM privileges. When the work has been completed, then, of
course, you will get paid for that work, so it's a great way to get
started in AWS and start earning money from that very valuable
certification that you've gained.
AWS Architecture and compliance
The AWS Architecture Center provides reference architecture diagrams, vetted architecture solutions, Well-Architected best practices, patterns, icons, and more.
These examples are very complex and detailed, and most of these are in PDF format and define and describe the architecture for you to implement. Now some of these also have a GitHub repository, and that includes a CloudFormation template, and that will allow you to launch these very complex architectures yourself simply by clicking a button.
WordPress example
All the code here is deployed from a CloudFormation template
AWS also provides us with sample repos for the full setup: Repo Link
CloudFormation uses YAML templates to define the configuration of the infrastructure.
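As a hedged sketch, assuming the Python SDK, of what "clicking the button" does under the hood; the stack name and template URL below are placeholders, not the actual repository's:

    import boto3

    cfn = boto3.client("cloudformation")

    # Hypothetical stack name and template location.
    cfn.create_stack(
        StackName="wordpress-reference-architecture",
        TemplateURL="https://example-bucket.s3.amazonaws.com/wordpress.yaml",
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
    )

    # Block until the stack has finished creating.
    cfn.get_waiter("stack_create_complete").wait(
        StackName="wordpress-reference-architecture"
    )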
Now, up until now, we have been installing or launching WordPress
instances as a single instance, and that single instance contains the
file storage for all of our media files, being our pictures, our
documents, our videos, anything that we're going to be using within that
WordPress application and also that single instance will contain the
MySQL database as well. The problem that
creates when we go to an auto-scaling multi-instance environment is that
because the database and the file server are not centrally located, each
one of those instances will have a different copy or a different version
of that data, so when a new instance is launched, that will not have any
of the data that is in another instance, and so every time that a
customer comes through the load balancer and gets directed to an
instance, it will be a totally different experience the next time they go
to the load balancer and get directed to a different instance, so we
need a way of centrally managing all of our media files and all of our
data.
So here we can see, we've got an
Amazon EFS Share
that has been created, and that will centrally locate all of that
media storage, and we have an EFS mount
target so those WordPress instances can access all of that data, all of
those media files that are in that
EFS share. And so that solves that problem
for our media files, then we look at our database, so we no longer have
the database located on our individual instances. The database is now on
Amazon Aurora, and it also has a read
replica which is going to help for durability and for speed for our read
requests. So now, all of a sudden, our data is also centrally located,
so any new instances that are launched, they will be using that same
data and the same media storage as well. So of course, to do all of that yourself is going to be quite difficult, so if you're going to install something like this, it's certainly worth going to those reference architectures and searching for one as a starting point, at the very least, to get those best practices sorted out.
AWS Well-Architected Framework
To just close this out, here we can see we've got the
AWS Well-Architected Framework, and that helps you to build architecture around AWS best practices.
It provides a framework for you to work in. It's very high level. It's
very generic, and it's not prescriptive as such, but it does get you
thinking in the right areas. So let's have a look at that. So what it's
built around are five pillars of excellence being operational
excellence, security, reliability, performance efficiency, and cost
optimization.
So when we talk about operational excellence, we need to make sure that
we've got our processes designed like we would have in any other good
business process. We need to make sure that our architecture is
implemented as code. We're using
CloudFormation templates or something like that to define our architecture, with version control around that architecture, and we're taking advantage of automation to streamline and to
reduce waste, just the same as we would with any other good business
process.
Security, making sure users are granted
least privilege. In other words, they only have access to the minimum
that they need. Implementing CloudTrail to
track user activity and CloudWatch to alert
us to any issues.
Creating a VPC architecture that is robust
and has multiple layers of security. Reliability, implementing a highly
available and fault-tolerant architecture that can respond to demand
both long-term demand and spikes in demand. Performance efficiency,
making sure that we're getting the most out of those resources. They're
not sitting around doing nothing; we don't have EC2 instances and EBS volumes that we're paying for but not using. And finally, we've
got Cost Optimization, making sure that we
get the right solution for our budget. There's no point designing something
that's massive and complicated and expensive if it's just not going to
be economical to do that. We might be able to go to a lower-cost
solution, such as designing a static website using
CloudFront, rather than having large
servers that cost a great deal of money. Now to help us along the way,
there's actually an
AWS Well-Architected Tool, and there's a link to it up here, so I'll just open that up now. How
it works, is again, we go through those five pillars of excellence, and
it will ask you a series of questions, and it's a bit like a
benchmarking process. It helps you to identify where you are now and
what you need to do to achieve AWS best practices. You define your
workload first, and then you'll go through, and you'll answer all of
these questions relating to the five pillars. Once you've done that, you
can save that as a milestone, and then you can come back when you've fixed up all of the issues that have been identified, go back and redo the review in the Well-Architected Tool, and put in another milestone, until you get to the
point where you're completely satisfied that you've taken into consideration everything in this Well-Architected Framework and that
you're satisfied that you're achieving AWS best practices, and then you
can print out a report, and that is great for going to clients and
saying well look, this is what we've taken into consideration with your
architecture, and we've come up with a solution that is based upon AWS
best practices on a number of different areas, and this is what we have
come up with, so it's not only a great design tool. It's also a great
marketing tool as well.
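As a rough sketch with the Python SDK, assuming a workload has already been defined in the tool (the workload ID below is made up), recording a milestone and listing the recorded answers looks roughly like this:

    import boto3

    wa = boto3.client("wellarchitected")

    WORKLOAD_ID = "abcdef1234567890abcdef1234567890"  # hypothetical workload ID

    # Record where the review currently stands as a named milestone.
    wa.create_milestone(
        WorkloadId=WORKLOAD_ID,
        MilestoneName="after-first-remediation",
    )

    # List the answers recorded against the Well-Architected lens, along with
    # the risk level identified for each question.
    answers = wa.list_answers(WorkloadId=WORKLOAD_ID, LensAlias="wellarchitected")
    for answer in answers["AnswerSummaries"]:
        print(answer["QuestionTitle"], answer.get("Risk"))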
Okay, so we've got these great tools.