AWS Solution Architect - Associate Exam

15 Jun 2019

Been a long time since I’ve posted!!

I have been busy with… studying for my AWS exam hahahahaha

Can’t believe I’m taking it - will update with results hopefully in a couple of weeks’ time!!

In the meantime… here are my notes (extremely messy but it might hopefully be of help? lol)

For those who are interested: you can check out the below courses on Udemy, extremely helpful!

Just listen to the entire video series by Ryan, and then do all the practice papers until you can consistently score around 90%.

Good luck to everyone!!!


UPDATE: HAHAHAHAHA SOMEHOW MANAGED TO SCRAPE A PASS!!!! pretty insane lol the test was so difficult it’s not even funny.


IAM

AWS Cognito

With Amazon Cognito, your users can sign in through social identity providers such as Google, Facebook, and Amazon, and through enterprise identity providers such as Microsoft Active Directory via SAML.


S3

Capacity

Characteristics

Redundancy

Cross-Region Replication

Versioning

Cross Account Access

S3 Standard

S3 Infrequently Accessed

S3 One Zone IA

S3 Intelligent Tiering

S3 Glacier

S3 Deep Glacier

S3 RRS (Reduced Redundancy Storage)

Storage Gateways


CloudFront


EC2

Amazon EC2 bare metal instances provide your applications with direct access to the Intel® Xeon® Scalable processor and memory resources of the underlying server. These instances are ideal for workloads that require access to the hardware feature set (such as Intel® VT-x), for applications that need to run in non-virtualized environments for licensing or support requirements, or for customers who wish to use their own hypervisor.

Placement groups

Uptime SLA for EC2 and EBS

99.95%

Elastic IP

VM Export/Import

VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment.

Windows Server Licenses

A dedicated host is required to use your existing Windows Server licenses.

Virtualization Types

Each instance type supports one or both of the following types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The virtualization type of your instance is determined by the AMI that you use to launch it.

Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main differences between PV and HVM AMIs are the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.

Types of EC2

I: IOPS

H: High Disk throughput

R: RAM

M: Main choice for general purpose

T: Cheap general purpose

C: Compute

P: Pics (for graphics)

X: Extreme memory (for databases) - X1e was created to run high performance databases

Z: Extreme memory and compute

D: Density (Hadoop)

Optimising

Monitoring

Memory metrics are not automatically collected. Report them to CloudWatch as custom metrics for tracking.

Auto-scaling

Auto Scaling can automatically maintain desired capacity and replace unhealthy instances

Default termination policy: the Availability Zone with the most instances is selected first; within that AZ, the instance with the oldest launch configuration is terminated first.

Moving EC2 volumes

AWS does not copy launch permissions, user-defined tags, or Amazon S3 bucket permissions from the source AMI to the new AMI; these need to be manually applied.

Change instance families

EBS or Instance Store

EBS Backed Volume: Can be stopped. Can be rebooted. The root device volume is deleted on termination by default, but you can select the option to keep it.

If an Amazon EBS volume is an additional partition (i.e. not the root volume), can I detach it without stopping the instance? Yes, although it might take some time.

Instance Store: Ephemeral. If the underlying host fails, data is lost. Instances cannot be stopped! You can reboot them. Deleted on termination.

Encrypting Root Device Volume

1) Create snapshot of unencrypted root device volume

2) Encrypt the snapshot when creating a copy

3) Create an AMI from the encrypted snapshot

4) Launch the new encrypted instance using the AMI

On-demand

Purchased at a fixed rate per hour. AWS recommends using On-Demand Instances for applications with short-term, irregular workloads that cannot be interrupted.

Spot

Use when you can be flexible about when applications run and can be interrupted.

Reserved Instances

Suited for consistent, heavy, predictable usage. You pay for the entire term regardless of usage.

Get public hostname of EC2

curl http://169.254.169.254/latest/meta-data/public-hostname

Get public and private IP address of EC2

curl http://169.254.169.254/latest/meta-data/public-ipv4
curl http://169.254.169.254/latest/meta-data/local-ipv4

When an EC2 instance with an associated Elastic IP is stopped and restarted, the instance will typically restart on a different physical host, and all instance-store data will be lost.

There are brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. The cost-effective solution to the unpredictable spike in traffic is to use SQS to decouple the application components. Pre-warming an ELB signifies that these spikes in traffic are predictable.


EBS Volumes

Maximum volume size of a single EBS volume is 16 TiB. If you need more EBS storage, one option is to stripe across multiple EBS volumes in RAID 0 configuration. RAID 1 offers redundancy through mirroring, i.e., data is written identically to two drives. RAID 0 offers no redundancy and instead uses striping, i.e., data is split across all the drives.

RAID 5 is a redundant array of independent disks configuration that uses disk striping with parity. Because data and parity are striped evenly across all of the disks, no single disk is a bottleneck. Striping also allows users to reconstruct data in case of a disk failure.

For best performance, use Provisioned IOPS (PIOPS) volumes in a RAID 0, which will give you better performance than RAID 5 because you are no longer writing parity to your partitions.
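As a toy, non-AWS-specific sketch of why RAID 0 is fast (all names here are my own): every write is chopped into stripes and dealt round-robin across the member volumes, with no parity computed or written.

```python
def raid0_stripe(data: bytes, volumes: int, stripe_size: int) -> list:
    """Chop data into stripe_size chunks and deal them round-robin across volumes."""
    layout = [[] for _ in range(volumes)]
    chunks = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    for i, chunk in enumerate(chunks):
        layout[i % volumes].append(chunk)
    return layout

# 8 KiB striped across 4 volumes in 1 KiB stripes: each volume holds 2 stripes,
# and every byte is stored exactly once (no parity, no mirror copies).
layout = raid0_stripe(b"x" * 8192, volumes=4, stripe_size=1024)
print([len(vol) for vol in layout])  # [2, 2, 2, 2]
```

RAID 1 would instead copy every chunk to both volumes, and RAID 5 would spend one chunk per stripe row on parity.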

Taking snapshots

For a consistent snapshot of an EBS Volume:

Change encryption for EBS volume


ECS (Elastic Container Service)

Roles


Load Balancer

Application Load Balancer

Classic Load Balancer

Network Load Balancer

Sticky Sessions

When sticky sessions are enabled, requests from a client are routed to the same server. However, be aware that this can cause unpredictable behavior when the traffic pattern shifts quickly: newly added capacity may not be used to handle the existing user workload. When sticky sessions are disabled, requests are distributed evenly across all available instances.

To get requestor IP address

To load balance TCP

Public vs Private IP address

When you launch an instance in a default VPC, we assign it a public IP address by default. When you launch an instance into a nondefault VPC, the subnet has an attribute that determines whether instances launched into that subnet receive a public IP address from the public IPv4 address pool. By default, we don’t assign a public IP address to instances launched in a nondefault subnet.

Public IP VS Elastic IP

Security Groups

When you create a new security group, all inbound traffic is denied by default, and all outbound traffic is allowed by default.

Cross Zone Load Balancing

If cross-zone load balancing is enabled, the load balancer distributes traffic evenly across all registered instances in all enabled Availability Zones. In this case Availability Zone A has one instance and B has three instances. Each instance is receiving 25% of the traffic. When cross zone load balancing is disabled, each availability zone would get 50% of the traffic.
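The AZ-A-vs-AZ-B numbers above can be checked with a quick sketch (my own helper, not an AWS API):

```python
def per_instance_share(instances_per_az, cross_zone):
    """Fraction of total traffic each instance receives. With cross-zone
    load balancing disabled, traffic is split evenly per AZ first; with it
    enabled, traffic is split evenly across every registered instance."""
    if cross_zone:
        total = sum(instances_per_az)
        return [[1 / total] * n for n in instances_per_az]
    az_share = 1 / len(instances_per_az)
    return [[az_share / n] * n for n in instances_per_az]

# AZ A has 1 instance, AZ B has 3:
print(per_instance_share([1, 3], cross_zone=True))   # every instance gets 0.25
print(per_instance_share([1, 3], cross_zone=False))  # [[0.5], [~0.167 each]]
```

With cross-zone off, the lone instance in AZ A absorbs its AZ's full 50%, which is exactly the imbalance the feature exists to fix.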


Monitoring

With the Resource Groups tool, you use a single page to view and manage your resources.

AWS CloudTrail

AWS CloudTrail records AWS Management Console actions and API calls. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred.

AWS CloudWatch

AWS Config

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.

System status check

Identifies AWS infrastructure-related issues: loss of system power, loss of network connectivity to the host, or problems on the physical host.

Instance status check

Checks your instance’s software and network configuration; it also fails whenever the system status check fails.


Security

Encryption in Transit

Encryption at Rest

Type of CMK: can view / can manage / used only for my AWS account

Customer managed CMK: Yes / Yes / Yes
AWS managed CMK: Yes / No / Yes
AWS owned CMK: No / No / No

IDS/IPS

To protect EC2:


AWS Simple Queue Service (SQS)

Long VS Short Polling

AWS Simple Notification Service (SNS)

AWS Simple Email Service (SES)

Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable, cost-effective service for businesses of all sizes that use email to keep in contact with their customers.

AWS Software Development Kit (SDK)

A collection of software tools for the development of applications

Prove identity with AWS SES & ISP (Internet Service Provider) when sending emails

Sender Policy Framework (SPF) identifies the email servers that are authorized to send emails on your domain’s behalf. This information is specified as part of your DNS resource records. A recipient can query the DNS service to cross-check the server name and detect if somebody is spoofing your email address.

DomainKeys Identified Mail (DKIM) protects your email messages from tampering. It works through digital signing; your public key needs to be listed as part of your DNS resource records. The recipient can query the DNS service to get the public key and cross-check the signature. Best practice is to use both methods.

AWS Simple WorkFlow Service (SWF)

If your app’s steps take more than 500 milliseconds to complete, you need to track the state of processing, and you need to recover or retry if a task fails, Amazon SWF can help you.

Use when failures need to be detected and handled through Amazon SWF’s cloud workflow management.

Elastic Transcoder

Convert (or “transcode”) media files from their source format into versions that will playback on devices like smartphones, tablets and PCs.

API Gateway

Kinesis


Serverless

On-Premises: Get your own physical servers

Infrastructure As A Service (IaaS): EC2 - Servers provided with just an API call

Platform As A Service (PaaS): Elastic Beanstalk - entire process provided

Software As A Service (SaaS): Wordpress - just use the software

Containers: Docker - provide the container and everything else is settled (BUT still need to manage the containers using Kubernetes)

Functions As A Service (FaaS) aka Serverless: Lambda - provide your code and everything else is handled.

AWS Lambda

Lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.

Lambda functions, by default, are allowed access to internet resources. To access databases and other resources in your VPC, you need to configure the Lambda function to run inside the context of a private subnet in your VPC. When this is done, your Lambda function gets a private IP address and can reach resources in your VPC. In this mode, it can access internet services only if the private subnet has a route to a NAT device.

There is an upper limit on the number of concurrent Lambda function executions for your account in each region. You can optionally specify a concurrent execution limit at the function level to prevent too many concurrent executions; executions that exceed the limit are throttled. When invoked synchronously, the caller is responsible for retries. When invoked asynchronously, the Lambda service automatically retries twice. You can configure a Dead Letter Queue where failed events can be stored. S3 invokes Lambda asynchronously, and the unit of concurrency is the number of configured events.
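A toy model of the concurrency limit (my own simplification - real Lambda queues and retries asynchronous events rather than simply dropping them):

```python
def throttled_count(invocations, limit):
    """Toy per-region concurrency limit: an invocation is throttled when
    `limit` others are still running at its start time. Here throttled
    calls are simply rejected, as with a synchronous invoke where the
    caller is responsible for retrying."""
    running = []  # end times of in-flight invocations
    throttled = 0
    for start, end in sorted(invocations):
        running = [e for e in running if e > start]  # drop finished invocations
        if len(running) >= limit:
            throttled += 1
        else:
            running.append(end)
    return throttled

# Three overlapping invocations against a concurrency limit of 2:
print(throttled_count([(0, 5), (1, 5), (2, 5)], limit=2))  # 1
```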

Lambda support versioning and you can maintain one or more versions of your lambda function. Each lambda function has a unique ARN. Lambda also supports Alias for each of your functions. Lambda alias is a pointer to a specific lambda function version. Alias enables you to promote new lambda function versions to production and if you need to rollback a function, you can simply update the alias to point to the desired version. Event source needs to use Alias ARN for invoking the lambda function.

Lambda cannot listen on input ports. Use API gateway to listen, receive then invoke Lambda.

Lambda may incur a startup delay if functions are invoked after a long period of idle.

With Lambda, you have to choose amount of memory needed to execute your function. Based on the memory configuration, proportional CPU capacity is allocated.

Redshift

CloudFormation

Deploy resources at scale, completely scripting your cloud environment.

Use to build a reproducible, version-controlled infrastructure.

Scripted in JSON or YAML

By default, CloudFormation ensures all or nothing deployment. If there is an error at any step and CloudFormation is not able to proceed, then it will remove all AWS resources in a stack that were created by CloudFormation

You can use the Fn::GetAtt function to query the value of an attribute from a resource in the template.

CloudFormation does not check for account limits. So, it is possible that your stack creation may fail if it exceeds account limits

A company has network team that is responsible for creating and managing VPCs, Subnets, Security Groups and so forth. Application teams are required to use these existing VPCs, Security Groups for their application instances. In CloudFormation, what capability can you use to refer to these common resources that were created in other stacks: Cross Stack References.

Common resources can be managed using a separate stack. Other stacks can simply refer to the existing resources using cross-stack references. This allows independent teams to be responsible for their resources. When creating a template, you can indicate what resources are available for cross stack references by exporting those values (Export output field). Other stacks can use Fn::ImportValue function to import the value.

Nested Stacks are used for common templates for creating the same type of resources across multiple stacks. For example: elastic load balancer required by each application

Elastic Beanstalk

You can provision your Elastic Beanstalk resources in an existing VPC.

Just upload the application - Elastic Beanstalk handles everything else.

Proactive Cyclic Scaling: automatically start up and shut down instances during predictable periodic peaks.

Proactive Event-Based Scaling: automatically scale in anticipation of peaks caused by certain events, e.g. Black Friday, Boxing Day, half-price sale days.

Releasing new version of your application software: You can upgrade an existing environment or create a brand new environment for the application version.

Deployment options:

All at once - deploys the new version to all instances simultaneously. Instances are out of service for a short period.

Rolling - updates a batch of instances at a time. Each batch is taken out of service, and available capacity is reduced by the number of instances in the batch.

Rolling with additional batch - launches an additional batch of instances to maintain full capacity during the deployment, then deploys the new version in batches.

Immutable - deploys the new version to a fresh set of instances.
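A tiny sketch (my own helper name, not an Elastic Beanstalk API) of how minimum in-service capacity differs between plain rolling and rolling with an additional batch:

```python
def min_in_service(total_instances, batch_size, additional_batch):
    """Lowest in-service capacity during a deployment. Plain rolling takes a
    batch out of service; rolling with an additional batch launches
    batch_size fresh instances first, so capacity never dips."""
    if additional_batch:
        return total_instances
    return total_instances - batch_size

print(min_in_service(4, 1, additional_batch=False))  # 3: capacity dips per batch
print(min_in_service(4, 1, additional_batch=True))   # 4: full capacity maintained
```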

Swap Environment URL option in Elastic Beanstalk is convenient for handling blue/green deployment scenarios.

Deploying RDS instances with Elastic Beanstalk is not recommended. When you delete an environment, you will lose the database. In addition, deploying database with application forces you to rev both at the same time. This is not recommended for production as you need flexibility to update database and application at their own cadence

To store RDS database backups for a period of 5 years, take periodic snapshots.

When you initiate a point-in-time recovery, transaction logs are applied to the most appropriate daily backup in order to restore your DB instance to the specific time you requested. You can initiate a point-in-time restore and specify any second during your retention period, up to the Latest Restorable Time

Code Pipeline

Source Action monitors a source control system, like GitHub, AWS CodeCommit, or an S3 versioned bucket, and triggers an automatic pipeline invocation when a new version of code is available.

Build Action is for creating software binaries from source code.

Test action is used for running tests against your code.

Deploy Action is used for deploying your code using variety of deployment providers like CloudFormation, CodeDeploy, ECS, Elastic Beanstalk and more.

Approval Action is used for manual approvals of a stage in a pipeline.

Invoke Action is used for performing custom action


Databases

Overloaded database:

RDS Read Replicas are based on asynchronous replication technology and do not impact primary DB transactions. A read replica may see a backlog build up if there are momentary interruptions.

Scale-out is NOT supported in RDS/MySQL for increasing write throughput.

RDS (OLTP - Online Transaction Processing)

DynamoDB (NoSQL). It has to scan the entire table if there is no index/secondary index for the search criteria. (For OLAP - Online Analytical Processing - use Redshift.)

Both DynamoDB and ElastiCache provide high performance storage of key-value pairs. Storage Gateway is a storage service, but it is a hybrid storage service that enables on-premises applications to use cloud storage.

DynamoDB provides consistent, single-digit millisecond latency at any scale. ElastiCache provides sub-millisecond latency to power real-time applications.

Amazon RDS does not currently support increasing storage on a SQL Server Db instance

In RDS, changes to the backup window take effect immediately.

In RDS, what is the maximum size for a Microsoft SQL Server DB instance with SQL Server Express edition? 10 GB.

Can you conduct your own vulnerability scans within your own VPC without alerting AWS first? No.

DynamoDB

DynamoDB automatically scales throughput capacity to meet workload demands, and partitions and repartitions your data as your table size grows. Also, DynamoDB synchronously replicates data across three facilities in an AWS Region, giving you high availability and data durability. DynamoDB is automatically redundant across multiple availability zones.

Eventually consistent reads (the default) – The eventual consistency option maximizes your read throughput.

A strongly consistent read (option to choose) returns a result that reflects all writes that received a successful response before the read.

You are a consultant planning to deploy DynamoDB across three AZs. Your lead DBA is concerned about data consistency. Advise the lead DBA to use strongly consistent reads where the application needs the most up-to-date data.

Streams record DynamoDB item changes in order. Lambda configured to poll and update ElastiCache provides a convenient mechanism to update the cache

Capacity Reservation allows you to obtain discounts on DynamoDB provisioned throughput capacity. This requires 1 year or 3 year commitment and applies for a REGION for which the capacity was purchased.

DynamoDB Global Tables are designed for massively scaled multi-master replication across AWS regions. This takes care of automatically replicating changes happening in the table that are happening in different regions. You can use this to provide low latency access to data irrespective where the user is located. S3 Cross Region Replication is meant for one way synchronization between two S3 regions. It is not designed for two-way or multi-way replication. ElastiCache is region specific service and does not perform automatic replication; you would end up writing logic for replicating data.

The maximum item size in DynamoDB is 400 KB (combined value and name).

RDS

DB Instance: database environment in the cloud with the compute and storage resources you specify

RDS manages the setting up: provisioning the infrastructure capacity; installing the database software.

RDS automates common administrative tasks: performing backups (1 day retention period by default, maximum of 35 days) and patching the software that powers your database.

With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover.

For multi-AZ high availability, RDS uses synchronous replication between the primary and standby systems. If the standby is slow, transactions will take longer to complete. An RDS Read Replica, on the other hand, uses asynchronous replication, so any slowness in the Read Replica instance simply causes data lag in the read replica. Transactions on the primary are not impacted.

Native database access: you’re still responsible for managing the database settings, building the relational schema, responsible for any performance tuning to optimize your database for your application’s workflow.

40 RDS DB instances per account by default.

Within one DB instance: up to 100 databases for SQL Server; 1 for Oracle (no limit on schemas); no limit for other engines.

To move an RDS instance: Take a snapshot of the RDS instance and create it inside your VPC.

Automatic backups are deleted when the instance is deleted. A final snapshot is created on deletion unless ‘SkipFinalSnapshot’ is selected.

Upgrade to larger instance class: to minimise disruption, schedule during least customer period; downtime is a couple of minutes.

RDS VS relational database AMI

Amazon RDS: offloads database administration. Relational database AMIs on EC2: manage your own relational database in the cloud.

Automatic Failover

Use Multi-AZ (for disaster recovery! not for improving performance).

Use Multi-AZ to take backups from IO intensive database so IO activity is not suspended (backup taken from the standby)

For performance improvement, use Read Replicas (max 5), which use asynchronous replication. Automatic backups must be turned on to deploy a read replica.

Cannot have multi-AZ copy of your read replica.

Bottlenecks

After ElastiCache, read replicas, CloudFront, and S3 caching have been exhausted, implement database partitioning and spread data across multiple DB instances.

Encrypting

To encrypt an existing database, create a new instance with encryption enabled and migrate data into it.


VPC

Once a VPC is set to Dedicated hosting, it is not possible to change the VPC or the instances to Default hosting. You must re-create the VPC.

You work for an automotive company which is migrating their production environment into AWS. The company has 4 separate segments: Dev, Test, UAT & Production. They require each segment to be logically isolated from the others. What VPC configuration should you recommend? Deploy a separate VPC for each segment, completely isolating that segment from the other segments.

How many VPCs can you have per region in your AWS account? 5.

VPC Endpoint

Elastic Network Interface (ENI)

Every instance in a VPC has a default network interface, called the primary network interface (eth0). You cannot detach a primary network interface from an instance. You can create and attach additional network interfaces.

ENIs can be used in the following scenarios:

Traffic Control

A security group can grant access to traffic from the allowed networks via the CIDR range for each network.

By default all subnets will be able to communicate with each other using the main route table.

Network Address Translation (NAT): used to allow instances in a private subnet to initiate outbound traffic to the internet.

NAT instance: need to disable source and destination checks.

NAT Gateways: multiple NAT gateways across Availability Zones so it is not a single point of failure.

VPC endpoint: talk directly to S3 (for example) without going through the internet; traffic stays on the AWS network.

Assign elastic IP to the instance to provide internet access.

Virtual Private Gateway: assign a public IP address to the VPG to allow for a site-to-site VPN connection.

An Amazon VPC VPN connection links your data center (or network) to your Amazon VPC virtual private cloud (VPC). A customer gateway is the anchor on your side of that connection. It can be a physical or software appliance. The anchor on the AWS side of the VPN connection is called a virtual private gateway.

Create a VPC:

1) Create a VPC (max size /16).

2) A Security Group, Route Table, and Network ACL are created by default.

3) Create subnets (e.g. 10.0.1.0/24, 10.0.2.0/24). Set one to automatically assign public IP addresses; that will be the public subnet. One subnet, one Availability Zone!!!

4) Create one Internet Gateway (one per VPC).

5) Keep the main route table private, because every subnet is associated with the main route table by default. Create a separate route table for the public subnet, with a route (IPv4 0.0.0.0/0 and IPv6 ::/0) to the Internet Gateway.

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined.

A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).

Edge-to-edge routing is not allowed through a VPN connection.

A bastion host sits in a public subnet, and serves as a secure gateway through which one SSHes into instances in a private subnet.

After setting up a VPC peering connection between your VPC and that of your clients, the client requests to be able to send traffic between instances in the peered VPCs using private IP addresses. If a route is added to your Route Table, your client will have access to your instance via private IP address.

A placement group may not span paired VPCs or multiple Regions. Placement Groups are limited to a single AZ.

The online application must be in public subnets to allow access from clients’ browsers. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet. A NAT Gateway is required to give the database cluster the ability to download patches from the Internet. NAT Gateways must be deployed in public subnets.

ELB nodes are deployed in your VPC subnet. Lambda functions configured to access your VPC’s private resources will use up addresses in the assigned subnet. EC2 instances are assigned addresses from your subnet.

Bastion Host with a single well-known access point is the recommended option and you can let your customers access the EC2 instances using private DNS name or private IP Addresses. Bastion Host also improves security posture as it reduces attack surface by keeping your EC2 instances in private subnet. You can tighten instances’ security group to allow access only from Bastion Host security group. Private DNS Name and Private IP Address remains attached to the instance until the instance is terminated.


CNAMEs cannot be used on naked domain names (the zone apex, i.e. without the www).

Each /8 block contains 2^24 = 16,777,216 addresses.

/28 - smallest possible subnet in AWS VPC.
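Python’s standard ipaddress module can verify these CIDR numbers (the 5 reserved addresses per subnet are the AWS reservation of the first four addresses and the last one):

```python
import ipaddress

# A /8 block: 2 ** (32 - 8) = 16,777,216 addresses.
block = ipaddress.ip_network("10.0.0.0/8")
print(block.num_addresses)  # 16777216

# /28 is the smallest subnet a VPC allows: 16 addresses. AWS reserves the
# first four addresses and the last one in every subnet, leaving 11 usable.
subnet = ipaddress.ip_network("10.0.0.0/28")
print(subnet.num_addresses - 5)  # 11 usable

# A /16 VPC (the maximum size) can be carved into 256 /24 subnets:
print(sum(1 for _ in ipaddress.ip_network("10.0.0.0/16").subnets(new_prefix=24)))  # 256
```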

Size of SSD volumes: 1 GiB - 16 TiB

Elastic Map Reduce (EMR)

Scalable and Reliable Solution

AWS Server Migration

While an SQS queue can be an important part of a decoupled web application, it is not required when hosting a highly available static website on EC2. An auto scaling group configured to deploy EC2 instances in multiple subnets located in multiple availability zones allows an application to remain online despite an instance or AZ failure.

Auto scaling is not really intended to respond to instantaneous spikes in traffic, as it will take some time to spin-up the instances that will handle the additional traffic. For sudden traffic spikes, make sure your application issues a 503 - Service Unavailable message.

The pillars of the AWS Well-Architected Framework are Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.

DynamoDB and Amazon RDS are managed services. As such, AWS handles the ongoing maintenance.

Write a cron job that uses the AWS CLI to take snapshots of the EBS volume. The data from an EBS volume snapshot is durable because EBS snapshots are stored on Amazon S3 Standard.

Access to the underlying operating system is granted for Elastic Map Reduce and Elastic Beanstalk. The others are managed services.

A team is building an application that must store persistent JSON data and be able to have an index. Data access must remain consistent if there is high traffic volume. Use DynamoDB.

A unit of Read Capacity enables you to perform one strongly consistent read per second (or two eventually consistent reads per second) of items of up to 4KB in size.

A unit of Write Capacity enables you to perform one write per second for items of up to 1KB in size.
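These two rules can be turned into a small capacity calculator (my own helper functions; item size is rounded up to the 4 KB / 1 KB boundaries as the rules state):

```python
import math

def read_capacity_units(item_size_bytes, reads_per_second, strongly_consistent=True):
    """1 RCU = 1 strongly consistent (or 2 eventually consistent) reads/sec
    of an item up to 4 KB; larger items consume multiple 4 KB units."""
    units_per_read = math.ceil(item_size_bytes / 4096)
    rcu = units_per_read * reads_per_second
    return rcu if strongly_consistent else math.ceil(rcu / 2)

def write_capacity_units(item_size_bytes, writes_per_second):
    """1 WCU = 1 write/sec of an item up to 1 KB."""
    return math.ceil(item_size_bytes / 1024) * writes_per_second

print(read_capacity_units(6000, 10))                             # 20 (6 KB -> two 4 KB units)
print(read_capacity_units(6000, 10, strongly_consistent=False))  # 10 (halved when eventual)
print(write_capacity_units(1500, 10))                            # 20 (1.5 KB -> two 1 KB units)
```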

EFS

SSH: port 22. HTTP: port 80. HTTPS: port 443. FTP: port 21. MySQL: port 3306. RDP: port 3389. SQL Server: port 1433.

It is necessary to set up bi-directional network permissions, normally with Security Groups. You connect the EFS target to your EC2 instance with a ‘mount’ statement. You do not need to stipulate the size or format the volume; AWS provides a nominally unlimited file system ready for you to use. As usual under the shared responsibility model, AWS ensures that the EFS system is secure, but you are responsible for the access-control security inside the EFS file space provided to you.

Security groups are stateful, and also consolidate rules.

Network ACLs are NOT stateful (you have to configure both inbound and outbound rules), and rules are evaluated in order of rule number.

When editing permissions (policies and ACLs), to whom does the concept of the “Owner” refer? The “Owner” refers to the identity and email address used to create the AWS account.

For software licenses tied to physical cores and sockets, use dedicated hosts or bare metal instances.


Elastic MapReduce

Master node controls and directs the cluster (terminating it ends the cluster).

Core nodes process and store data using HDFS (risk of data loss if terminated).

Task nodes process data but do not hold persistent data - add spot capacity here.

Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.

Amazon Route 53 does not have a default TTL for any record type

An application uses geolocation-based routing on Route 53. Route 53 receives a DNS query but is unable to detect the requester’s geolocation. The default location is returned if a default record is configured; otherwise, a “no answer” response is returned.

When using Alias Resource Record Set, Amazon Route 53 uses the CloudFront, Elastic Beanstalk, Elastic Load Balancing, or Amazon S3 TTLs

Health Check needs to be configured for Route 53 to become aware of application down scenarios. It will then act on the routing configuration specified

To point Zone Apex Record to another AWS supported end point, you need to Alias resource record set.

Aurora

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.

Aurora automatically replicates data six ways across three different Availability Zones. For other database engines in RDS, to replicate data to a different AZ, you would need to enable a Multi-AZ deployment to set up a standby instance in a different Availability Zone.

You have an application that receives traffic only during certain times of the year. The rest of the time, it sees very little traffic.

Aurora Serverless has a pause and resume capability to automatically stop the database compute capacity after a specified period of inactivity. When paused, you are charged only for storage. It automatically resumes when new database connections are requested
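A sketch of the scaling configuration that drives this pause/resume behaviour, in the shape RDS's `create_db_cluster` accepts for Aurora Serverless (values are illustrative):

```python
# Sketch: Aurora Serverless ScalingConfiguration with auto-pause enabled.
# Capacity units and the pause timeout below are illustrative values.
scaling_configuration = {
    "MinCapacity": 2,
    "MaxCapacity": 16,
    "AutoPause": True,               # pause compute when idle
    "SecondsUntilAutoPause": 1800,   # after 30 minutes of inactivity
}
```

While paused, only storage is billed; new connections trigger an automatic resume.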

In Aurora, a Read Replica is promoted to primary during a primary instance failure. If you do not have an Aurora Read Replica, then Aurora launches a new instance and promotes it to primary. In other RDS products, you would need to use a Multi-AZ deployment to configure a standby instance.

Aurora supports MySQL or PostgreSQL compatibility when launching an Aurora database. This allows existing tools and clients to connect to Aurora without requiring modification

You have configured an Aurora database with five read replicas. What is the recommended mechanism for clients to connect to read replicas? Each Aurora DB cluster has a reader endpoint. If there is more than one Aurora Replica, the reader endpoint directs each connection request to one of the Aurora Replicas.
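One way to picture the split: send reads to the reader endpoint and writes to the cluster (writer) endpoint. The endpoint hostnames below are hypothetical, and routing on the SQL verb is a simplification:

```python
# Sketch: route reads to the Aurora reader endpoint, writes to the
# cluster (writer) endpoint. Hostnames are hypothetical examples.
WRITER_ENDPOINT = "mycluster.cluster-abc123.us-east-1.rds.amazonaws.com"
READER_ENDPOINT = "mycluster.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(sql: str) -> str:
    """SELECT statements go to the reader endpoint; everything else to the writer."""
    is_read = sql.lstrip().upper().startswith("SELECT")
    return READER_ENDPOINT if is_read else WRITER_ENDPOINT
```

The reader endpoint itself load-balances connections across the replicas, so clients do not need to track individual replica addresses.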

You would like to automatically replace instances that are not healthy due to underlying infrastructure or common guest OS related issues. Auto Scaling automatically does this.

Data security is the responsibility of the customer. AWS provides capabilities to manage data security; however, it is up to the customer to take advantage of those security capabilities based on their individual needs. Physical infrastructure, facilities, host computers, and network infrastructure are all responsibilities of AWS.

Recovery Point Objective (RPO) indicates the acceptable amount of data loss, measured in time. If disaster strikes at time T and your RPO is 2 hours, then you have processes and procedures in place to restore the systems as they appeared at T-2 hours.

Recovery Time Objective (RTO) captures the time it takes to restore business processes to an acceptable level after a disaster.
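The arithmetic behind RPO and RTO can be sketched in a few lines (timestamps are made up for illustration):

```python
from datetime import datetime, timedelta

# Sketch: a 2-hour RPO means a disaster at time T restores data as of
# T minus 2 hours; a 4-hour RTO bounds when service is back up.
rpo = timedelta(hours=2)
rto = timedelta(hours=4)

disaster = datetime(2019, 6, 15, 12, 0)
oldest_acceptable_restore_point = disaster - rpo  # data as of 10:00
service_restored_by = disaster + rto              # back up by 16:00
```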


AWS Five Pillars

  1. Operational Excellence
    • ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
  2. Security
    • ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
  3. Reliability
    • ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
  4. Performance Efficiency
    • ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve.
  5. Cost Optimisation
    • ability to avoid or eliminate unneeded cost or suboptimal resources.

AWS Shared Responsibility Model

AWS responsibility “Security of the Cloud” - AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.

Customer responsibility “Security in the Cloud” – Customer responsibility will be determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.

In regards to EC2, which of the following is not a customer's responsibility under the shared responsibility model? Decommissioning and destruction of storage media.

Four levels of AWS premium support

Basic, Developer, Business, Enterprise

Credit Card Payments

Regions

14 Regions currently

Network attacks

The AWS platform does not provide you much protection against social engineering attacks; it does provide protection against the other attacks (man-in-the-middle, IP spoofing, port scanning).

Route53

After establishing a Direct Connect service between your VPC and their on-premises network, and confirming all the routing, firewalls, and authentication, you find that while you can resolve names against their DNS, the other company's services are unable to resolve names against your DNS servers.

Route 53 has a security feature that prevents internal DNS from being read by external sources. The workaround is to create an EC2-hosted DNS instance that does zone transfers from the internal DNS and allows itself to be queried by external servers.

Bottlenecks

You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to avoiding bottlenecks. The design calls for about 20 instances (C3.2xLarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which network configuration should you plan on deploying?

When considering network traffic, you need to understand the difference between storage traffic and general network traffic, and the ways to address each. The 10 Gbps is a red herring, in that the 500 Mbps only occurs in short intervals, and therefore your sustained throughput is not 10 Gbps. Wherever possible, use simple solutions such as spreading the load out rather than expensive high-tech solutions.