Been a long time since I’ve posted!!
I have been busy with… studying for my AWS exam hahahahaha
Can’t believe I’m taking it - will update with results hopefully in a couple of weeks’ time!!
In the meantime… here are my notes (extremely messy but it might hopefully be of help? lol)
For those who are interested: you can check out the below courses on Udemy, extremely helpful!
- AWS Solutions Architect - Associate (by Ryan Kroonenburg)
- 2019 Practice Test AWS Solutions Architect Associate (by Chandra Lingam)
Just listen to the entire video series by Ryan, then do all the practice papers until you can consistently score around 90%.
Good luck to everyone!!!
UPDATE: HAHAHAHAHA SOMEHOW MANAGED TO SCRAPE A PASS!!!! pretty insane lol, the test was so difficult it’s not even funny.
Defaults to deny
Enable multi-factor authentication (MFA) using IAM
New user defaults to no access until explicitly provided
- Policy evaluation:
- Evaluates identity/group level and resource level policies
- An explicit deny in any policy overrides the allow.
- For a given request, if no policy explicitly allows it, the default deny applies.
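The deny-overrides-allow order of evaluation can be sketched in a few lines of Python. This is a toy model with a simplified, hypothetical policy shape (exact string matching only); real IAM also evaluates wildcards, conditions, SCPs, and resource policies:

```python
def evaluate(policies, action, resource):
    """Toy IAM-style evaluation: explicit Deny wins, then explicit
    Allow, otherwise the default deny applies.
    policies: list of dicts like
      {"Effect": "Allow" or "Deny", "Action": ..., "Resource": ...}
    (a hypothetical simplified shape, exact matches only)."""
    decision = "Deny"  # default deny
    for stmt in policies:
        if stmt["Action"] == action and stmt["Resource"] == resource:
            if stmt["Effect"] == "Deny":
                return "Deny"       # an explicit deny overrides any allow
            if stmt["Effect"] == "Allow":
                decision = "Allow"  # remember the allow, keep scanning for denies
    return decision
```

Adding a matching Deny statement anywhere in the list flips the result to "Deny", no matter how many Allows are present.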
You cannot nest IAM Groups (no group hierarchies).
You cannot assign EC2 instances to a group, only AWS users can be assigned to a group. Roles are the best way to achieve your desired goal without storing credentials for a long period of time.
Cross account access: 1) Resource owner account needs to trust the requester account and requester needs to explicitly delegate permissions to other IAM users in requester’s account. 2) If permissions are not explicitly delegated, only requester’s root account and administrative accounts can access resources in the other account. Account to account trust can be established using IAM Roles or by using resource level policies
To revoke programmatic access for an IAM user: Remove access key credentials.
Managed Policies are automatically version-controlled and retain up to five versions (whether AWS- or customer-managed)
Managed policy lives independently of the attached entity. Managed policies are reusable with automatic versioning
You specify a principal using the Amazon Resource Name (ARN) of the AWS account, IAM user, IAM role, federated user, or assumed-role user. You cannot specify IAM groups and instance profiles as principals.
- ARN is transformed to the user’s unique principal ID when the policy is saved. This helps mitigate the risk of someone escalating their privileges by removing and recreating the user. Each user has a unique ID.
With Amazon Cognito, your users can sign in through social identity providers such as Google, Facebook, and Amazon, and through enterprise identity providers such as Microsoft Active Directory via SAML.
User Pools: to store user profiles
Federation/ Identity Pools: You can control access to your backend AWS resources and APIs (IAM roles)
SAML 2.0 based federation. This feature enables federated single sign-on (SSO), so users can log into the AWS Management Console or call the AWS APIs without you having to create an IAM user for everyone in your organization
Object size: from 0 bytes to 5 TB
Default limit of 100 S3 buckets per AWS account.
For PUTS > 100 requests per second and for GETS > 300 requests per second: add a random prefix to the object key in order to distribute the objects across a larger number of S3 nodes
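The random-prefix trick can be sketched like this. A hash-derived prefix stands in for a random one so the same object always maps to the same key; the key names and helper are made up for illustration:

```python
import hashlib

def prefixed_key(key: str, width: int = 4) -> str:
    """Prepend a short pseudo-random prefix to an S3 object key so
    sequentially named uploads spread across partitions (the 2019-era
    guidance this note refers to). Hypothetical helper for illustration."""
    # Hashing the key gives a deterministic pseudo-random prefix,
    # so reads can recompute the same prefixed key later.
    prefix = hashlib.md5(key.encode()).hexdigest()[:width]
    return f"{prefix}/{key}"
```

Keys like `2019/06/01/photo1.jpg`, `2019/06/01/photo2.jpg` would otherwise all land under the same prefix; the hash spreads them out.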
S3 static website endpoint format: bucket-name.s3-website-region.amazonaws.com
You can write directly to an Edge Location (e.g. S3 Transfer Acceleration uploads via edge locations).
S3 provides eventual consistency for overwrite PUTS & DELETES.
Data access auditing: customers who need to capture IAM/user identity information in their logs can configure AWS CloudTrail Data Events.
For S3 Standard, S3 Standard-IA, and S3 Glacier storage classes: in a Region, objects are automatically stored across multiple devices spanning a minimum of three Availability Zones.
For S3 One Zone-IA storage class: in a Region, objects are stored redundantly within a single Availability Zone.
Automatic, asynchronous copying of objects across buckets in different AWS Regions.
Helps with: compliance (if requires storage across Regions), reduces latency, increase operational efficiency, different ownership (owner override - restrict access only to object replicas).
Versioning must be enabled for both buckets.
Replication also copies the object’s ACL (access control list)
Enable versioning to protect accidental overwrites & deletes.
Versioning stores the complete data for every object upload (e.g. 1 MB for version 1 plus 2 MB for version 2 takes up 3 MB of space)
Only the owner of an S3 bucket can permanently delete a version.
If you delete an object (without specifying version) in the source bucket, then delete marker is created in the source bucket. Delete marker is replicated in the destination bucket as well.
When you delete a specific version of an object in the source bucket, cross region replication does not remove object version in replicated bucket. You need to have a separate lifecycle management policy for replicated bucket.
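The versioning behaviour above (full copies per version, a delete marker on an unversioned delete) can be modelled as a toy class; this is an illustration, not S3’s actual data model:

```python
class VersionedObject:
    """Toy model of an S3 object in a versioned bucket."""

    def __init__(self):
        self.versions = []         # size in MB of each stored version
        self.delete_marker = False

    def put(self, size_mb):
        # Every upload stores a complete new version.
        self.versions.append(size_mb)
        self.delete_marker = False

    def delete(self):
        # A delete without a version ID only adds a delete marker;
        # no version data is removed, so no storage is freed.
        self.delete_marker = True

    def storage_mb(self):
        return sum(self.versions)  # 1 MB v1 + 2 MB v2 -> 3 MB total
```

Permanently deleting a specific version (which only the bucket owner can do) would be the operation that actually frees storage.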
Cross Account Access
S3 ACLs allow you to specify the AWS account using an email address or the canonical user ID.
If Object owner (account A) created it in a different account’s bucket (account B), Account A has to explicitly grant access to the object for bucket owner (account B) to access it, even though B has full access to the bucket. After that, account B has to explicitly grant access to its IAM users to be able to access it, even though the IAM users have full access to the bucket.
S3 Standard
- 99.99% availability, 99.999999999% durability
- Stored redundantly; designed to sustain the concurrent loss of 2 facilities
S3 Infrequently Accessed
- 99.9% availability
- Rapid access when needed
- Minimum object size of 128KB
- Minimum charge of 30 days
S3 One Zone IA
- 99.5% availability
- No need for data resilience
- Use when able to regenerate data
S3 Intelligent Tiering
- Automatically move to optimise cost
- To restore an object from Glacier, use the S3 API or the AWS Console.
- Glacier automatically encrypts using AES 256. It handles the key management for you.
- Standard Retrieval takes 3-5 hours
- Bulk Retrieval takes 5-12 hours
- Range Retrieval allows you to retrieve only specified byte ranges. You pay only for the actual data retrieved.
- Bulk Retrieval is the lowest-cost option and can be used to cost-effectively retrieve large amounts of data from Glacier.
- Expedited Retrieval can be used for occasional urgent requests; data is typically retrieved in 1-5 minutes (for files < 250 MB). To guarantee expedited retrieval availability, purchase provisioned capacity.
- To ensure that project documents are not deleted or tampered with (for compliance reasons): Use Vault Lock. You can specify controls such as “write once read many” (WORM) in a vault lock policy and lock the policy from future edits. Once locked, the policy can no longer be changed. You use a vault access policy to implement access controls that are not compliance related, temporary, and subject to frequent modification. Vault lock and vault access policies can be used together. For example, you can implement time-based data retention rules in the vault lock policy (deny deletes), and grant read access to designated third parties or your business partners (allow reads). IAM Access policy dictates who has access to vaults; but on its own is not sufficient for compliance related controls.
S3 Glacier Deep Archive
- 12 hours retrieval time
S3 RRS (Reduced Redundancy Storage)
- 99.99% availability and durability
- To store noncritical, reproducible data.
- Designed to sustain the loss of data in a single facility; data is not replicated as many times as in S3 Standard.
File Gateways: File gateway presents a file-based interface to Amazon S3, which appears as a network file share. Your most recently used data is cached on the gateway for low-latency access, and data transfer between your data center and AWS is fully managed and optimized by the gateway. You can run file gateway on-premises or in EC2.
- Volume Gateways:
Storage Gateway with Gateway-Cached Volumes: your primary data is written to S3, while retaining your frequently accessed data locally in a cache for low-latency access.
Storage Gateway with Gateway-Stored Volumes: your primary data is stored locally and your entire dataset is available for low-latency access while asynchronously backed up to AWS. Use this when data needs to be available even when connection over the internet is down.
- Tape Gateways: Tape gateway is a cloud-based Virtual Tape Library (VTL).
- CloudFront allows you to cache the content closest to your customer using AWS global Edge network.
- CloudFront supports S3, websites hosted in AWS, as well as external websites hosted on-premises.
- CloudFront also supports dynamic content that is personalized for the signed-in user
Is a global service (like IAM)
- You want to distribute content in S3 bucket using CloudFront edge locations. You also want to restrict access to the content to only the users who are authorized by your application.
- Configure content to be accessible only using signed URLs or signed cookies in cloudfront
- Create a Cloudfront user known as Origin Access Identity (OAI) and grant read access to S3 bucket of OAI
- Remove all other permissions.
network access logging: You should make use of an OS level logging tools such as iptables and log events to CloudWatch or S3.
Underlying Hypervisor for EC2: Xen/ KVM (Nitro)/ Bare-metal
Amazon EC2 bare metal instances provide your applications with direct access to the Intel® Xeon® Scalable processor and memory resources of the underlying server. These instances are ideal for workloads that require access to the hardware feature set (such as Intel® VT-x), for applications that need to run in non-virtualized environments for licensing or support requirements, or for customers who wish to use their own hypervisor.
Placement Groups
Cluster: for low-latency/high-throughput workloads. A cluster placement group is restricted to a single availability zone!!
Spread: for small numbers of critical instances that should be kept separate from each other (on different racks) to minimise risk.
Uptime SLA for EC2 and EBS
- 5 Elastic IP addresses per region.
- Associated with AWS account, not a particular instance.
VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment.
Windows Server Licenses
A dedicated host is required to use your existing Windows Server licenses.
Each instance type supports one or both of the following types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The virtualization type of your instance is determined by the AMI that you use to launch it.
Linux Amazon Machine Images use one of two types of virtualization: paravirtual (PV) or hardware virtual machine (HVM). The main differences between PV and HVM AMIs are the way in which they boot and whether they can take advantage of special hardware extensions (CPU, network, and storage) for better performance.
Types of EC2
- Fight Dr McPxz AU
H: High Disk throughput
M: Main choice for general purpose
T: Cheap general purpose
P: Pics (for graphics)
X: Extreme memory (for databases) - X1e was created to run high performance databases
Z: Extreme memory and compute
D: Density (hadoop)
10,000 IOPS: EBS General Purpose SSD greater than 3.3 TB (max 10,000 IOPS)
16,000 IOPS: EBS Provisioned IOPS SSD provides sustained performance for mission-critical low-latency workloads
75,000 IOPS: SSD Based instance storage. Reason: when using EBS volumes, traffic is routed through network and you may hit instance specific upper limit on supported IOPS. Maximum EBS IOPS/instance is 75,000 (irrespective of how many EBS volumes are attached to the instance).
Random I/O: use SSD based storage.
Infrequently accessed, large, sequential, cold data workloads at very low cost: Cold HDD (sc1)
Memory metrics are not collected automatically; push them to CloudWatch as custom metrics for tracking.
Auto Scaling can automatically maintain desired capacity and replace unhealthy instances
Default scale-in order: the AZ with the most instances is picked first, then the instance with the oldest launch configuration is terminated.
Moving EC2 volumes
Cross AZ: Take a snapshot, create an AMI, then create in another availability zone
Cross Region: Take a snapshot, create an AMI, copy the AMI to the other region, then create in another region
AWS does not copy launch permissions, user-defined tags, or Amazon S3 bucket permissions from the source AMI to the new AMI; these need to be manually applied.
Change instance families
Convertible RIs cannot be sold as unused RI capacity on Reserved Instance Marketplace
EBS or Instance Store
EBS-Backed Volume: can be stopped and rebooted. The root volume is deleted on termination by default, but you can select the option to keep it.
If an Amazon EBS volume is an additional partition (ie not the root volume), can I detach it without stopping the instance? Yes, although it might take some time.
Instance Store: ephemeral. If the underlying host fails, the data is lost. Cannot be stopped! Can be rebooted. Deleted on termination by default.
Encrypting Root Device Volume
1) Create snapshot of unencrypted root device volume
2) Encrypt the snapshot when creating a copy
3) Create an AMI from the encrypted snapshot
4) Launch the new encrypted instance using the AMI
On-Demand: purchased at a fixed rate per hour. AWS recommends On-Demand Instances for applications with short-term, irregular workloads that cannot be interrupted.
Spot: use when you can be flexible about when applications run and they can be interrupted.
Reserved: suited for consistent, heavy, predictable usage. You pay for the entire term regardless of usage.
Get the public hostname, or the public and private IP addresses, of an EC2 instance: query the instance metadata service at http://169.254.169.254/latest/meta-data/
When an EC2 instance with an associated Elastic IP is stopped and restarted, the instance may come back up on a different physical host and all instance-store data will be lost; the Elastic IP stays associated with the instance.
There are brief, strong, and entirely unpredictable spikes in traffic that overwhelm your EC2 instances’ resources and freeze the application. The cost-effective solution to unpredictable spikes in traffic is to use SQS to decouple the application components. (Pre-warming an ELB is only an option when the spikes in traffic are predictable.)
Maximum volume size of a single EBS volume is 16 TiB. If you need more EBS storage, one option is to stripe across multiple EBS volumes in RAID 0 configuration. RAID 1 offers redundancy through mirroring, i.e., data is written identically to two drives. RAID 0 offers no redundancy and instead uses striping, i.e., data is split across all the drives.
RAID 5 is a redundant array of independent disks configuration that uses disk striping with parity. Because data and parity are striped evenly across all of the disks, no single disk is a bottleneck. Striping also allows users to reconstruct data in case of a disk failure.
For best performance, use Provisioned IOPS (PIOPS) in a RAID 0, which gives better performance than RAID 5 because you are not writing parity to your partition.
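How RAID 0 striping splits data can be sketched as follows (a toy model; real RAID works at the block-device level, and drive count and stripe size here are made up for illustration):

```python
def stripe(data: bytes, drives: int, stripe_size: int):
    """Split data round-robin across drives in fixed-size stripes:
    more aggregate throughput, no redundancy (lose one drive, lose
    the whole data set)."""
    chunks = [data[i:i + stripe_size] for i in range(0, len(data), stripe_size)]
    layout = [b"" for _ in range(drives)]
    for i, chunk in enumerate(chunks):
        layout[i % drives] += chunk  # chunk 0 -> drive 0, chunk 1 -> drive 1, ...
    return layout
```

With RAID 1 the same sketch would instead write `data` identically to every drive (mirroring).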
EBS volumes support live configuration changes while in production. You can modify volume type, volume size, and IOPS capacity without service interruptions.
An EBS volume and the instance to which it attaches must be in the same Availability Zone.
When you create an EBS volume in an Availability Zone, it is automatically replicated within that zone to prevent data loss due to failure of any single hardware component.
Uncheck the ‘Delete on Termination’ checkbox when you configure EBS volumes for your instance on the EC2 console. You continue to pay for the volume usage as long as the data persists.
Create snapshots (backups) of any EBS volume and write a copy of the data in the volume to Amazon S3, where it is stored redundantly in multiple Availability Zones.
By optionally specifying a different Availability Zone, you can use this functionality to create a duplicate volume in that zone.
When you create snapshots, you incur charges in Amazon S3 based on the volume’s total size. For a successive snapshot of the volume, you are only charged for any additional data beyond the volume’s original size. If you have a volume with 100 GiB of data, but only 5 GiB of data have changed since your last snapshot, only the 5 GiB of modified data is written to Amazon S3. Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume.
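The incremental billing described above amounts to simple arithmetic; a sketch (the function name and numbers are illustrative):

```python
def snapshot_storage_gib(initial_gib, changed_gib_per_snapshot):
    """GiB billed in S3 after the first (full) snapshot plus one
    incremental snapshot per entry in changed_gib_per_snapshot.
    Each later snapshot stores only the blocks changed since the
    previous one, yet can still restore the full volume."""
    return initial_gib + sum(changed_gib_per_snapshot)
```

So a 100 GiB volume followed by a snapshot after 5 GiB of changes bills roughly 105 GiB, not 200 GiB.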
The volume does not need to be attached to a running instance in order to take a snapshot.
For a consistent snapshot of an EBS Volume:
- Ensure application flushes any cached data to disk
- No other write I/O is performed by file system on that volume
- Issue the snapshot command: it takes a point-in-time snapshot, and this step completes in only a few seconds
You can start using the volume after this. Snapshot data copy happens in the background and you don’t have to wait for data copy to complete.
- If the application holds large amounts of data in cache that is not written to disk automatically, and you need to take a consistent snapshot of the instance, shutdown the EC2 instance, detach the EBS volume, then take the snapshot.
Change encryption for EBS volume
- Change the key during snapshot copy process
- Or, mount a new EBS volume with the desired key and copy data from old volume to new volume
ECS (Elastic Container Service)
ECS Scheduler is responsible for placing the tasks on container instances.
Service is where you configure long running tasks and how many containers you need.
For each task copy, containers that are defined as part of a single task definition are placed together.
ECS Instance Role is used for granting permissions to the EC2 instance. All containers running on that instance will gain privileges granted with that role.
ECS Task Role allows you to grant fine grained access based on task specific needs. All containers that are part of this task will gain privileges granted as part of the role.
Install SSL certificates on ELBs so there is less load on the EC2 instances. Make use of TLS (Transport Layer Security) connections that terminate at a Load Balancer (you can think of TLS as providing the “S” in HTTPS). This will free your backend servers from the compute-intensive work of encrypting and decrypting all of your traffic. TLS termination is now supported by Classic, Application and Network Load balancers.
In order to handle internet traffic, EC2 instances registered with the load balancer must have a private IP address: Elastic Load Balancers route requests to your instances using their private IP addresses.
Elastic Load Balancer acts as a middleman: it receives requests from clients, and the load balancer nodes in turn send the requests to the EC2 instances. Similarly, responses are sent by the EC2 instances back to the load balancer nodes, and clients receive the response from the load balancer node.
Application Load Balancer
- Smart Load Balancer
- Only HTTP/HTTPS based apps!
- Supports dynamic port mapping with ECS. If you have a service with two containers, you would normally need at least two ECS container instances, because multiple containers can’t listen on the same port on the same server, so each container would be hosted on a separate server. Dynamic port mapping with ECS allows you to run two containers of a service on a single server on dynamic ports, which the ALB automatically detects and reconfigures itself for.
- People use Application Load Balancers because they scale automatically to adapt to changes in your traffic. This makes planning for growth easy, but it has a side effect of changing the IP addresses that clients connect to.
- Allow to add load balancing target by IP address (good for distributing loads across on-prem and AWS webservers) but only using private IP address AND on-premises data center should have a VPN connection to AWS VPC or a Direct Connect link to your AWS infrastructure.
Classic Load Balancer
- Basic round robin
- Only Static Port mapping (Container Port to Host Port mapping) scheme is supported
- Only Classic Load Balancer can be deployed in an EC2-Classic network
- Layer 4 load balancer: For Internet traffic specifically, a Layer 4 load balancer bases the load-balancing decision on the source and destination IP addresses and ports recorded in the packet header
Network Load Balancer
- Extreme performance, ultra low latencies
- Layer 4, TCP load balancer: For Internet traffic specifically, a Layer 4 load balancer bases the load-balancing decision on the source and destination IP addresses and ports recorded in the packet header
- Listeners are layer 4 (TCP/TLS); it can carry HTTP, HTTPS, TCP, and SSL traffic
- Only product which assigns a static IP address per availability zone where it is deployed - good for our firewalls’ whitelisting. (For ALB and CLB, have to whitelist by name.)
- Allow to add load balancing target by IP address (good for distributing loads across on-prem and AWS webservers) but only using private IP address AND on-premises data center should have a VPN connection to AWS VPC or a Direct Connect link to your AWS infrastructure.
Sticky Sessions
When sticky sessions are enabled, requests from a client are routed to the same server. However, be aware that this can cause unpredictable behaviour when traffic patterns shift very quickly: newly added capacity may not be used to handle the existing user workload. When sticky sessions are disabled, requests are evenly distributed across all available instances.
To get requestor IP address
- NLB forwards the Requester IP address to your application. Your instance would know who the requester is.
- ALB intercepts traffic between clients and your back-end instances; the access logs on your back-end instances contain the IP address of the load balancer instead of the requester. Enable and use ALB access logs (or read the X-Forwarded-For header) to get the requester’s IP address.
To load balance TCP
- For newer applications, AWS recommends using Network Load Balancer.
- Can also use Classic
Public vs Private IP address
A public IP address is an IP address that can be accessed over the Internet. Like postal address used to deliver a postal mail to your home, a public IP address is the globally unique IP address assigned to a computing device.
A private IP address, on the other hand, is used to assign computers within your private space without exposing them directly to the Internet (e.g. the 10.0.0.0, 172.16.0.0, and 192.168.0.0 ranges)
The public IP address is not managed on the instance. It is an alias applied as a network address translation of the Private IP Address. The public IP address is mapped to the primary private IP address through network address translation (NAT).
When you launch an instance in a default VPC, we assign it a public IP address by default. When you launch an instance into a nondefault VPC, the subnet has an attribute that determines whether instances launched into that subnet receive a public IP address from the public IPv4 address pool. By default, we don’t assign a public IP address to instances launched in a nondefault subnet.
Public IP VS Elastic IP
Public IP: requests a public IP address from Amazon’s public IP address pool to make your instance reachable from the Internet. In most cases, the public IP address is associated with the instance until it’s stopped or terminated, after which it’s no longer available for you to use.
If you require a persistent public IP address that you can associate and disassociate at will, use an Elastic IP address (EIP) instead. You can allocate your own EIP, and associate it to your instance after launch.
- Network Load Balancer does not currently support security groups. If you are using Network Load Balancer, then you can restrict access to specific CIDR ranges using EC2 instance security group.
- ALB and Classic LB support security groups. You can optionally restrict access using a Network ACL in the subnet where you have deployed the load balancer nodes
When you create a new security group, all inbound traffic is NOT allowed by default.
When you create a new security group, all outbound traffic is allowed by default.
Cross Zone Load Balancing
If cross-zone load balancing is enabled, the load balancer distributes traffic evenly across all registered instances in all enabled Availability Zones. In this case Availability Zone A has one instance and B has three instances. Each instance is receiving 25% of the traffic. When cross zone load balancing is disabled, each availability zone would get 50% of the traffic.
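The 25% vs 50% split in the note falls out of a small calculation, assuming (as the note describes) that traffic is first split evenly across enabled AZs when cross-zone balancing is off; the function name is made up for illustration:

```python
def traffic_share_per_instance(instances_per_az, cross_zone):
    """Return, per AZ, the fraction of total traffic each registered
    instance receives under the simplified model in the note."""
    total = sum(instances_per_az)
    if cross_zone:
        # Cross-zone ON: traffic is spread evenly over ALL instances.
        return [[1 / total] * n for n in instances_per_az]
    # Cross-zone OFF: each AZ gets an equal share first, then splits
    # its share among its own instances.
    az_share = 1 / len(instances_per_az)
    return [[az_share / n] * n for n in instances_per_az]
```

With 1 instance in AZ A and 3 in AZ B: cross-zone on gives every instance 25%; cross-zone off gives the lone instance in A a full 50% while each instance in B gets about 16.7%.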
With the Resource Groups tool, you use a single page to view and manage your resources.
AWS CloudTrail records AWS Management Console actions and API calls. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred.
AWS CloudWatch enables you to gain system-wide visibility into resource utilization (e.g. CPU utilisation, memory usage), application performance, and operational health.
Collect metrics and logs from all your AWS resources and automatically publish detailed 1-minute metrics and custom metrics with up to 1-second granularity.
Stores metrics for 2 weeks. Extended data retention of up to 15 months is available.
CloudWatch Logs provide all the plumbing infrastructure to gather log files, store in CloudWatch, retrieve the log file content when needed, specify desired retention period.
Change scale-down metric on CloudWatch to a higher threshold to stop scaling up and down multiple times an hour.
By default, database-visible metrics such as the number of users are available. (CloudWatch for RDS)
When EC2 instance is recovered using Cloudwatch Alarm, instance is moved to a different physical host. Same metadata including public IP and private IP address.
One alarm watches one metric.
AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources.
System status check
Identifies AWS infrastructure-related issues: the physical host, network connectivity to the host, system power.
Instance status check
Identifies problems within your instance (e.g. OS-level issues). Also fails if the system status check fails.
Encryption in Transit
- HTTPS (achieved by SSL or TLS)
Encryption at Rest
- Server-Side Encryption:
- SSE-S3 (e.g. AES-256): You should choose SSE-S3 if you prefer to have Amazon manage your keys.
- SSE-KMS: Client and AWS manage keys together. With AWS KMS, there are separate permissions for the use of the master key. AWS KMS provides an audit trail so you can see who used your key to access which object and when, as well as view failed attempts to access data from users without permission to decrypt the data. Use KMS if you need audit trails.
- SSE-C: Use SSE-C if you want to maintain your own encryption keys, but don’t want to implement or leverage a client-side encryption library.
- Client-Side Encryption (encrypt then upload): Using an encryption client library (e.g. Amazon S3 Encryption Client) you retain control of the keys and complete the encryption and decryption of objects client-side. Only encrypted objects are transmitted over the Internet to Amazon S3.
Type of CMK          | Can view | Can manage | Used only for my AWS account
Customer managed CMK | Yes      | Yes        | Yes
AWS managed CMK      | Yes      | No         | Yes
AWS owned CMK        | No       | No         | No
The primary resources in AWS KMS are customer master keys (CMKs). You can use a CMK to encrypt and decrypt up to 4 KB (4096 bytes) of data. Typically, you use CMKs to generate, encrypt, and decrypt the data keys that you use outside of AWS KMS to encrypt your data. This strategy is known as envelope encryption.
Data keys are encryption keys that you can use to encrypt data, including large amounts of data and other data encryption keys.
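Envelope encryption can be illustrated with a toy sketch. XOR stands in for a real cipher purely for readability; do NOT use this for actual encryption, and in real KMS the unwrap step happens inside the service, not in your code:

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy "cipher": XOR each byte with the key (repeated as needed).
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(plaintext: bytes, master_key: bytes):
    data_key = os.urandom(32)                # 1. generate a fresh data key
    ciphertext = xor(plaintext, data_key)    # 2. encrypt the data with it
    wrapped_key = xor(data_key, master_key)  # 3. encrypt the data key with the CMK
    # Only the ciphertext and the wrapped key are stored; the plaintext
    # data key is discarded.
    return ciphertext, wrapped_key

def envelope_decrypt(ciphertext: bytes, wrapped_key: bytes, master_key: bytes):
    data_key = xor(wrapped_key, master_key)  # unwrap the data key with the CMK
    return xor(ciphertext, data_key)
```

This is why the 4 KB CMK limit doesn’t matter in practice: the CMK only ever encrypts the small data key, while the data key encrypts arbitrarily large data.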
AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud. Locate it near EC2 to decrease network latency.
To protect EC2:
- IDS: Intrusion Detection System
- IPS: Intrusion Prevention System
VPN encryption prevents third parties from reading your data as it passes through the internet. IPSec and SSL are the two most popular secure network protocol suites used in Virtual Private Networks, or VPNs. IPSec and SSL are both designed to secure data in transit through encryption.
VPC allows you to connect your cloud resources to your own IPSec VPN connections
- To audit inbound and outbound traffic in your VPC: VPC flow log
AWS Simple Queue Service (SQS)
With SQS, you must implement your own application-level tracking, especially if your application uses multiple queues, to keep track of all tasks and events in an application.
SQS is the cornerstone of a decoupled application.
Messages can be retained in queues for up to 14 days. Minimum is 1 minute
Message size: between 1 KB and 256 KB.
Dead Letter Queue allows you to capture poison pill messages that application is unable to process. When Dead Letter Queue is configured, SQS Service would automatically move the message to DLQ after specified number of delivery attempts
Standard Queue offers best-effort ordering and, in rare cases, can deliver duplicate messages (even within the visibility timeout window).
Standard queues offer maximum throughput, best-effort ordering, and at-least-once delivery. (Pull)
Increasing the visibility timeout will not decrease cost over time. Decreasing size of SQS messages decreases cost.
SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent.
FIFO queue increases concurrency by allowing concurrent processing of messages across different Groups.
maximum VisibilityTimeout of an SQS message in a FIFO queue: 12 hours
Visibility timeout: messages are invisible for this period of time after being received. If the timeout ends before processing is completed, the message might be delivered again; to avoid this, increase the timeout to match your processing time.
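The visibility timeout behaviour can be modelled with a toy in-memory queue (heavily simplified; real SQS is distributed, promises only at-least-once delivery, and the class and method names here are made up):

```python
class Message:
    def __init__(self, body):
        self.body = body
        self.invisible_until = 0.0  # timestamp until which the message is hidden

class ToyQueue:
    """Toy single-process model of an SQS queue's visibility timeout."""

    def __init__(self, visibility_timeout):
        self.visibility_timeout = visibility_timeout
        self.messages = []

    def send(self, body):
        self.messages.append(Message(body))

    def receive(self, now):
        # Return the first currently visible message and hide it
        # for the visibility timeout window.
        for m in self.messages:
            if now >= m.invisible_until:
                m.invisible_until = now + self.visibility_timeout
                return m
        return None

    def delete(self, message):
        # Consumers must delete a message after processing it,
        # otherwise it reappears once the timeout expires.
        self.messages.remove(message)
```

If the consumer takes longer than the timeout and never calls `delete`, the same message becomes visible again and is delivered a second time, which is exactly the duplicate-processing risk the note describes.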
Long VS Short Polling
Long polling: doesn’t return a reply until a message appears.
An application polls SQS Standard Queue for processing pending messages. Application polls with a batch size set to 10 and long polling wait time set to 10 seconds. There is only one message currently available in the queue. What will happen when the application makes a long polling receive request? Returns immediately with 1 message. (maximum of 10 messages). Response to the ReceiveMessage request contains at least one of the available messages and up to the maximum number of messages specified in the ReceiveMessage action.
With short-polling, multiple polls of the queue may be necessary to process all messages in the queue.
Short polling may fail to retrieve messages sometimes, but if no messages can be retrieved after multiple attempts, permissions are the more likely cause.
If a single-threaded application polls multiple queues, long polling will not work well: it blocks the thread and prevents you from processing messages in the other queue. Use short polling for this specific scenario. Otherwise, long polling is generally recommended.
You want to process high priority messages first always, then medium, then low. Have 3 separate queues for each priority.
AWS Simple Notification Service (SNS)
Amazon Simple Notification Service (Amazon SNS) is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications.
Push instead of pull.
Use Amazon SNS to trigger the processing pipelines when new content is updated, and Amazon SQS to decouple incoming jobs from pipeline processors.
SNS Topic is useful for broadcasting a message to multiple subscribers. However, subscribers that are down for extended period can lose messages. To prevent this, you can have a per consumer SQS Queue. SNS Topic can broadcast messages to multiple queues. Queue retention is configurable from 1 minute to up to 14 days. When consumer systems are back online, they can process the messages pending in their queue
SNS is suitable for broadcasting time sensitive information to multiple consumers or sending message to a single consumer
AWS Simple Email Service (SES)
Amazon Simple Email Service (Amazon SES) is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. It is a reliable, cost-effective service for businesses of all sizes that use email to keep in contact with their customers.
AWS Software Development Kit (SDK)
A collection of software tools for the development of applications
Prove identity with AWS SES & ISPs (Internet Service Providers) when sending emails. Sender Policy Framework (SPF) is for identifying email servers that are authorized to send emails on your domain’s behalf. This information is specified as part of your DNS resource records. A recipient can query the DNS service to cross-check the server name and detect if somebody is spoofing your email address
DomainKeys Identified Mail (DKIM) is for protecting your email messages from tampering. It is done using digital signing and your public key needs to be listed as part of your DNS resource records. Recipient can query DNS service to get public key and cross check the signature. Best practice is to use both these methods
AWS Simple WorkFlow Service (SWF)
If your app’s steps take more than 500 milliseconds to complete, you need to track the state of processing, and you need to recover or retry if a task fails, Amazon SWF can help you.
Use when failures need to be detected and handled through Amazon SWF’s cloud workflow management.
Convert (or “transcode”) media files from their source format into versions that will playback on devices like smartphones, tablets and PCs.
API Gateway handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management.
Allows caching. Write to CloudWatch.
Amazon Kinesis enables you to process and analyze data as it arrives and respond instantly instead of having to wait until all your data is collected before the processing can begin.
Streams: persists data in shards; your own consumers process it in real-time. If you split a shard in Kinesis Streams, existing data records remain in the parent shard and new data is sent to the child shards. Kinesis Streams allows multiple consumers to read data available in the streams.
Firehose: fully managed delivery of streaming data to destinations in near real-time (data is buffered before delivery)
Configure Kinesis Firehose to load to Redshift.
Kinesis Streams has a maximum retention of 7 days; Kinesis Firehose does not retain data long-term - it buffers and retries delivery for up to 1 day
Deeper analysis over longer duration should be considered as a batch processing use case. These are not stream processing use cases. You can store Kinesis data in other systems like RedShift for deeper analysis
- Lambda is a managed environment and typically you are not allowed to customize the Operating System or Execution Environment
On-Premises: Get your own physical servers
Infrastructure As A Service (IaaS): EC2 - Servers provided with just an API call
Platform As A Service (PaaS): Elastic Beanstalk - entire process provided
Software As A Service (SaaS): Wordpress - just use the software
Containers: Docker - provide the container and everything else is settled (BUT still need to manage the containers using Kubernetes)
Functions As A Service (FaaS) aka Serverless: Lambda - provide your code and everything else is handled.
Lets you run code without provisioning or managing servers. You pay only for the compute time you consume - there is no charge when your code is not running.
Lambda functions, by default, are allowed access to internet resources. To access databases and other resources in your VPC, you need to configure Lambda function to run inside the context of a private subnet in your VPC. When this is done: your lambda function gets a private IP address and can reach resources in your VPC. In this mode, it can access internet services only if private subnet has a route to a NAT device
There is an upper limit on the number of concurrent lambda function executions for your account in each region. You can optionally specify a concurrent execution limit at a function level to prevent too many concurrent executions. In this case, lambda executions that exceed the limit are throttled. When synchronously invoked, the caller is responsible for retries. When asynchronously invoked, the Lambda service automatically retries twice. You can configure a Dead Letter Queue where failed events can be stored. S3 invokes Lambda asynchronously and the unit of concurrency is the number of configured events
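The async retry behaviour can be sketched like this (a simulation of the semantics described above, not the Lambda API):

```python
def invoke_async(handler, event, dead_letter_queue):
    """Simulate Lambda's asynchronous invocation: the initial attempt
    plus two automatic retries; after three failures the event is
    delivered to the configured dead letter queue."""
    for _attempt in range(3):  # 1 initial try + 2 automatic retries
        try:
            return handler(event)
        except Exception:
            pass  # swallow the failure and retry
    dead_letter_queue.append(event)  # all attempts failed
    return None
```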
Lambda support versioning and you can maintain one or more versions of your lambda function. Each lambda function has a unique ARN. Lambda also supports Alias for each of your functions. Lambda alias is a pointer to a specific lambda function version. Alias enables you to promote new lambda function versions to production and if you need to rollback a function, you can simply update the alias to point to the desired version. Event source needs to use Alias ARN for invoking the lambda function.
Lambda cannot listen on input ports. Use API gateway to listen, receive then invoke Lambda.
Lambda may incur a startup delay if functions are invoked after a long period of idle.
With Lambda, you have to choose amount of memory needed to execute your function. Based on the memory configuration, proportional CPU capacity is allocated.
Redshift is optimized for batched write operations and for reading high volumes of data.
Columnar storage minimizes I/O and maximizes data throughput by retrieving only the blocks that contain data for the selected columns.
It is not meant for high frequency update use cases typically seen in OLTP systems
Up to 1/10th cost of other warehouse technologies
Deploy resources at scale, completely scripting your cloud environment.
Use to build a reproducible, version-controlled infrastructure.
Scripted in JSON or YAML
By default, CloudFormation ensures all or nothing deployment. If there is an error at any step and CloudFormation is not able to proceed, then it will remove all AWS resources in a stack that were created by CloudFormation
You can use GetAtt function to query the value of an attribute from a resource in the template.
CloudFormation does not check for account limits. So, it is possible that your stack creation may fail if it exceeds account limits
A company has network team that is responsible for creating and managing VPCs, Subnets, Security Groups and so forth. Application teams are required to use these existing VPCs, Security Groups for their application instances. In CloudFormation, what capability can you use to refer to these common resources that were created in other stacks: Cross Stack References.
Common resources can be managed using a separate stack. Other stacks can simply refer to the existing resources using cross-stack references. This allows independent teams to be responsible for their resources. When creating a template, you can indicate what resources are available for cross stack references by exporting those values (Export output field). Other stacks can use Fn::ImportValue function to import the value.
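A sketch of what that looks like in template YAML (the resource and export names here are made up for illustration):

```yaml
# --- network stack template: export the shared VPC id ---
Outputs:
  SharedVpcId:
    Value: !Ref SharedVpc
    Export:
      Name: network-SharedVpcId

# --- application stack template: import the exported value ---
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: App tier security group in the shared VPC
      VpcId: !ImportValue network-SharedVpcId
```

Export names must be unique within a region, and a stack cannot be deleted while another stack still imports one of its exports.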
Nested Stacks are used for common templates for creating the same type of resources across multiple stacks. For example: elastic load balancer required by each application
You can provision your Elastic beanstalk resources in an existing VPC
Just upload the application - Elastic Beanstalk handles everything else.
proactive Cyclic Scaling: automatically start up and shut down instances during predictable peak periods
proactive Event-Based Scaling: automatically scale in anticipation of peaks caused by certain events, e.g. Black Friday, Boxing Day, half-price sale days.
Releasing new version of your application software: You can upgrade an existing environment or create a brand new environment for the application version.
Deployment options:
- All at once – deploys the new version to all instances simultaneously; instances are out of service for a short period.
- Rolling – updates a batch of instances at a time; each batch is taken out of service, so available capacity is reduced by the number of instances in the batch.
- Rolling with additional batch – launches an additional batch of instances to maintain full capacity during deployment; deploys the version in batches.
- Immutable – deploys the new version to a fresh set of instances.
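The capacity trade-off between the policies above can be expressed as a quick sketch (my own summary function, not anything from the Beanstalk API):

```python
def in_service_capacity(total_instances, batch_size, policy):
    """Instances still serving traffic while one batch is being updated."""
    if policy == "all_at_once":
        return 0  # every instance is updated at the same time
    if policy == "rolling":
        return total_instances - batch_size  # the batch goes out of service
    if policy in ("rolling_additional_batch", "immutable"):
        return total_instances  # extra/fresh instances keep full capacity
    raise ValueError(f"unknown policy: {policy}")
```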
Swap Environment URL option in Elastic Beanstalk is convenient for handling blue/green deployment scenarios.
Deploying RDS instances with Elastic Beanstalk is not recommended. When you delete an environment, you will lose the database. In addition, deploying database with application forces you to rev both at the same time. This is not recommended for production as you need flexibility to update database and application at their own cadence
To store RDS Database Backups for a period of 5 years, take periodic snapshots (automated backup retention maxes out at 35 days).
When you initiate a point-in-time recovery, transaction logs are applied to the most appropriate daily backup in order to restore your DB instance to the specific time you requested. You can initiate a point-in-time restore and specify any second during your retention period, up to the Latest Restorable Time
Source Action is monitoring source code control system like github or AWS CodeCommit or S3 versioned bucket and trigger automatic pipeline invocation when new version of code is available.
Build Action is for creating software binaries from source code.
Test action is used for running tests against your code.
Deploy Action is used for deploying your code using variety of deployment providers like CloudFormation, CodeDeploy, ECS, Elastic Beanstalk and more.
Approval Action is used for manual approvals of a stage in a pipeline.
Invoke Action is used for performing custom action
- Create a read replica and point reads to it (supported for MySQL, MariaDB and PostgreSQL!)
- Use Elasticache to increase performance
RDS Read replica is created based on asynchronous replication technology and does not impact primary db transactions. Read-replica may see a backlog build up if there are momentary interruptions
Scale out is NOT supported in RDS/MySql for increasing write throughput.
RDS (OLTP - Online Transaction Processing)
- SQL, MySQL, PostgreSQL, Oracle, Aurora, MariaDB
DynamoDB (NoSQL - note that OLAP, Online Analytical Processing, is Redshift’s territory, not DynamoDB’s). It has to scan the entire table if there is no index/secondary index for the search criteria.
Both DynamoDB and ElastiCache provide high performance storage of key-value pairs. Storage Gateway is a storage service, but it is a hybrid storage service that enables on-premises applications to use cloud storage.
DynamoDB provides consistent, single-digit millisecond latency at any scale. ElastiCache provides sub-millisecond latency to power real-time applications.
Amazon RDS does not currently support increasing storage on a SQL Server DB instance
In RDS, changes to the backup window take effect immediately.
In RDS what is the maximum size for a Microsoft SQL Server DB Instance with SQL Server Express edition: 10 GB
You can conduct your own vulnerability scans within your own VPC without alerting AWS first? No.
DynamoDB automatically scales throughput capacity to meet workload demands, and partitions and repartitions your data as your table size grows. Also, DynamoDB synchronously replicates data across three facilities in an AWS Region, giving you high availability and data durability. DynamoDB is automatically redundant across multiple availability zones.
Eventually consistent reads (the default) – The eventual consistency option maximizes your read throughput.
A strongly consistent read (option to choose) returns a result that reflects all writes that received a successful response before the read.
You are a consultant planning to deploy DynamoDB across three AZs. Your lead DBA is concerned about data consistency. Which of the following do you advise the lead DBA to do?
- Code for strongly consistent reads. As the consultant, you will advise on the increased cost.
Streams record DynamoDB item changes in order. Lambda configured to poll and update ElastiCache provides a convenient mechanism to update the cache
Capacity Reservation allows you to obtain discounts on DynamoDB provisioned throughput capacity. This requires 1 year or 3 year commitment and applies for a REGION for which the capacity was purchased.
DynamoDB Global Tables are designed for massively scaled multi-master replication across AWS regions. This takes care of automatically replicating changes happening in the table that are happening in different regions. You can use this to provide low latency access to data irrespective where the user is located. S3 Cross Region Replication is meant for one way synchronization between two S3 regions. It is not designed for two-way or multi-way replication. ElastiCache is region specific service and does not perform automatic replication; you would end up writing logic for replicating data.
The maximum item size in DynamoDB is 400 KB (combined value and name).
DB Instance: database environment in the cloud with the compute and storage resources you specify
RDS manages the setting up: provisioning the infrastructure capacity; installing the database software.
RDS automates common administrative tasks: performing backups (1 day retention period by default, maximum of 35 days) and patching the software that powers your database.
With optional Multi-AZ deployments, Amazon RDS also manages synchronous data replication across Availability Zones with automatic failover.
For multi-AZ high availability, RDS uses synchronous replication between primary and standby systems. If standby is slow, transactions will take longer to complete. RDS Read Replica on the other hand uses asynchronous replication and any slowness in Read Replica instance would simply cause data lag in the read - replica. Transactions in primary is not impacted
Native database access: you’re still responsible for managing the database settings, building the relational schema, responsible for any performance tuning to optimize your database for your application’s workflow.
40 RDS DB Instances by default.
Within one instance, up to 100 for SQL Server, 1 for Oracle (no limit on schemas). Others, no limit.
To move an RDS instance: Take a snapshot of the RDS instance and create it inside your VPC.
Automatic backups are deleted when the instance is deleted. A final snapshot is created unless ‘SkipFinalSnapshot’ is selected.
Upgrade to a larger instance class: to minimise disruption, schedule it during the period of least customer activity; downtime is a couple of minutes.
RDS vs relational database AMIs: Amazon RDS offloads database administration; with relational database AMIs on EC2 you manage your own relational database in the cloud.
Use Multi-AZ (for disaster recovery! not for improving performance).
Use Multi-AZ to take backups from IO intensive database so IO activity is not suspended (backup taken from the standby)
For performance improvement, use Read Replicas (max 5; replication is asynchronous). Must have automatic backups turned on to deploy a read replica.
Cannot have multi-AZ copy of your read replica.
After Elasticache, read replicas, cloudfront and S3 for caching - implement database partitioning and spread data across multiple DB instances.
To encrypt an existing database, create a new instance with encryption enabled and migrate data into it.
Once a VPC is set to Dedicated hosting, it is not possible to change the VPC or the instances to Default hosting. You must re-create the VPC.
You work for a automotive company which is migrating their production environment in to AWS. The company has 4 separate segments, Dev, Test, UAT & Production. They require each segment to be logically isolated from each other. What VPC configuration should you recommend? Deploy a separate VPC for each segment, completely isolating that segment from the other segments.
how many VPCs can you have per region in your AWS account? 5.
- VPC Endpoint allows you to access supported services (currently S3 and DynamoDB) directly using private connection between VPC and supported AWS Service.
- This allows you to send application traffic without needing an internet gateway or NAT device.
- VPC endpoint is supported only within the SAME REGION i.e., application and the AWS Service needs to be in the same region
- EC2 instances access S3 using Public IP Address and traffic is routed through internet gateway. If VPC endpoint is used, S3 is accessed using AWS private network. In this case, bucket policy can use VPC ID or VPC Endpoint ID to restrict access. Note: Private IP Addresses are not supported in policies as multiple VPCs can share the same CIDR block
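For example, a bucket policy can deny any request that did not arrive through a specific VPC endpoint (the bucket name and endpoint ID below are hypothetical):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAccessUnlessFromVpce",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": { "aws:sourceVpce": "vpce-1a2b3c4d" }
      }
    }
  ]
}
```

Careful: a policy like this also blocks console access to the bucket, since console requests do not come through the endpoint.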
Elastic Network Interface (ENI) Every instance in a VPC has a default network interface, called the primary network interface (eth0). You cannot detach a primary network interface from an instance. You can create and attach additional network interfaces.
ENIs can be used in the following scenarios:
- Deploying a high-availability cluster (multiple network interfaces on a single instance)
- Providing a low-cost failover solution (You can detach an ENI from a failed EC2 instance and then attach it to another EC2 instance to quickly redirect traffic from the failed instance to a backup instance, thereby quickly restoring your services)
Traffic Control A security group can grant access to traffic from the allowed networks via the CIDR range for each network.
By default all subnets will be able to communicate with each other using the main route table.
Network Address Translation (NAT): used to allow instances in a private subnet to initiate outbound traffic to the internet.
NAT instance: need to disable source and destination checks.
NAT Gateways: multiple NAT gateways across Availability Zones so it is not a single point of failure.
VPC endpoint: talk straight to S3 (for e.g. without going through the internet. uses AWS network)
Assign elastic IP to the instance to provide internet access.
Virtual Private Gateway: the AWS-side anchor of a site-to-site VPN connection (the customer gateway on your side needs a public IP address).
An Amazon VPC VPN connection links your data center (or network) to your Amazon VPC virtual private cloud (VPC). A customer gateway is the anchor on your side of that connection. It can be a physical or software appliance. The anchor on the AWS side of the VPN connection is called a virtual private gateway.
Create a VPC:
1) Create a VPC (largest CIDR block is /16).
2) A Security Group, Route Table and Network ACL are created by default.
3) Create subnets (e.g. 10.0.1.0/24, 10.0.2.0/24). Change one to automatically assign public IP addresses; that will be the public subnet. One subnet maps to one availability zone!!!
4) Create one internet gateway (one per VPC).
5) Keep the main route table private! Because every subnet by default is associated with the main route table. Have a separate route table for the public subnet, with a route (for IPv4 0.0.0.0/0 and IPv6 ::/0) to the internet gateway.
Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources into a virtual network that you’ve defined.
A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account. The VPCs can be in different regions (also known as an inter-region VPC peering connection).
Edge-to-edge routing is not allowed through a VPN connection.
A bastion host sits in a public subnet, and serves as a secure gateway through which one SSHes into instances in a private subnet.
After setting up a VPC peering connection between your VPC and that of your clients, the client requests to be able to send traffic between instances in the peered VPCs using private IP addresses. If a route is added to your Route Table, your client will have access to your instance via private IP address.
A placement group may not span peered VPCs or multiple Regions. Cluster placement groups are limited to a single AZ (spread and partition placement groups can span AZs within a region).
- Cluster – packs instances close together inside an Availability Zone. This strategy enables workloads to achieve the low-latency network performance necessary for tightly-coupled node-to-node communication that is typical of HPC applications.
- Partition – spreads your instances across logical partitions such that groups of instances in one partition do not share the underlying hardware with groups of instances in different partitions. This strategy is typically used by large distributed and replicated workloads, such as Hadoop, Cassandra, and Kafka.
- Spread – strictly places a small group of instances across distinct underlying hardware to reduce correlated failures.
The online application must be in public subnets to allow access from clients’ browsers. The database cluster must be in private subnets to meet the requirement that there be no access from the Internet. A NAT Gateway is required to give the database cluster the ability to download patches from the Internet. NAT Gateways must be deployed in public subnets.
ELB nodes are deployed in your VPC subnet. Lambda functions when configured to access your VPC private resources will use up addresses in the assigned subnet. EC2 instances are assigned address from your subnet
Bastion Host with a single well-known access point is the recommended option and you can let your customers access the EC2 instances using private DNS name or private IP Addresses. Bastion Host also improves security posture as it reduces attack surface by keeping your EC2 instances in private subnet. You can tighten instances’ security group to allow access only from Bastion Host security group. Private DNS Name and Private IP Address remains attached to the instance until the instance is terminated.
CNAMEs cannot be used on naked/apex domain names (e.g. example.com without the www); use a Route 53 Alias record instead.
Each /8 block contains 2^24 = 16,777,216 addresses.
/28 - smallest possible subnet in AWS VPC.
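The subnet arithmetic above can be checked with Python’s standard ipaddress module (AWS reserves 5 addresses in every subnet: network address, VPC router, DNS, one for future use, and broadcast):

```python
import ipaddress

block8 = ipaddress.ip_network("10.0.0.0/8")    # a /8 spans 2**24 addresses
vpc = ipaddress.ip_network("10.0.0.0/16")      # largest allowed VPC CIDR
subnet = ipaddress.ip_network("10.0.1.0/28")   # smallest allowed VPC subnet

# AWS reserves 5 addresses per subnet, so usable hosts = total - 5.
usable_in_smallest_subnet = subnet.num_addresses - 5
```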
Size of SSD volumes: 1 GiB - 16 TiB
Elastic Map Reduce (EMR)
Scalable and Reliable Solution
- Scalable: resilient and operationally efficient, and decrease cost at scale.
AWS Server Migration
- Max number of VMware VMs migrated concurrently: 50
While an SQS queue can be an important part of a decoupled web application, it is not required when hosting a highly available static website on EC2. An auto scaling group configured to deploy EC2 instances in multiple subnets located in multiple availability zones allows an application to remain online despite an instance or AZ failure.
Auto scaling is not really intended to respond to instantaneous spikes in traffic, as it will take some time to spin-up the instances that will handle the additional traffic. For sudden traffic spikes, make sure your application issues a 503 - Service Unavailable message.
The pillars of the AWS Well Architected Framework are Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization.
DynamoDB and Amazon RDS are managed services. As such, AWS handles the ongoing maintenance.
Write a cron job that uses the AWS CLI to take snapshots of the EBS volume. The data from an EBS volume snapshot is durable because EBS snapshots are stored in Amazon S3.
Access to the underlying operating system is granted for Elastic Map Reduce and Elastic Beanstalk. The others are managed services.
A team is building an application that must store persistent JSON data and be able to have an index. Data access must remain consistent if there is high traffic volume. Use DynamoDB.
A unit of Read Capacity enables you to perform one strongly consistent read per second (or two eventually consistent reads per second) of items of up to 4KB in size.
A unit of Write Capacity enables you to perform one write per second for items of up to 1KB in size.
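Those two rules turn into simple ceiling arithmetic - a sketch (the helper names are mine):

```python
import math

def read_capacity_units(item_size_kb, reads_per_second, strongly_consistent=True):
    """1 RCU = one strongly consistent read/s (or two eventually
    consistent reads/s) of an item up to 4 KB."""
    units_per_read = math.ceil(item_size_kb / 4)  # round item up to 4 KB chunks
    if not strongly_consistent:
        units_per_read /= 2  # eventually consistent reads cost half
    return math.ceil(units_per_read * reads_per_second)

def write_capacity_units(item_size_kb, writes_per_second):
    """1 WCU = one write/s of an item up to 1 KB."""
    return math.ceil(item_size_kb) * writes_per_second
```

For example, reading a 6 KB item 10 times per second with strong consistency needs ceil(6/4) x 10 = 20 RCUs.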
- set linux permissions on EFS volume: chmod & chown
- mount the volume with ‘mount -t nfs -o xxxx ‘
- configure security group to allow traffic on port 2049 to EFS AND to EC2.
SSH: port 22 HTTP: port 80 HTTPS: port 443 FTP: port 21 MySQL: 3306 RDP: 3389 SQL Server: 1433
It is necessary to set up the bi-directional network permissions, normally with Security Groups. You will connect the EFS Target to your EC2 instance with a ‘mount’ statement. You do not need to stipulate the size or format the volume. AWS provide a nominally unlimited file system ready for you to use. As normal under the shared security model AWS will ensure that the EFS system is secure, but you are responsible for the access control security inside the EFS file space provided to you.
Security groups are stateful, and also consolidate rules.
ACL is NOT stateful (you have to configure both inbound and outbound rules), and rules are evaluated in order of rule number (lowest first; first match wins).
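The “goes in order of rule number” behaviour is worth internalising - a toy evaluator (ports only; the rule sets are made up):

```python
def evaluate_nacl(rules, port):
    """Network ACL sketch: rules are checked in ascending rule-number
    order and the FIRST matching rule wins; anything that matches no
    rule hits the implicit '*' rule and is denied."""
    for number in sorted(rules):
        port_from, port_to, action = rules[number]
        if port_from <= port <= port_to:
            return action
    return "deny"  # implicit default deny

inbound = {
    100: (443, 443, "allow"),  # allow HTTPS
    200: (0, 65535, "deny"),   # explicit catch-all deny
}
```

Swap the numbering (a deny at rule 50 before an allow at rule 100 for the same port) and the deny wins - ordering decides, unlike security groups, where rules are consolidated.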
When editing permissions (policies and ACLs), to whom does the concept of the “Owner” refer? The “Owner” refers to the identity and email address used to create the AWS account.
For software licenses tied to physical cores and sockets, use dedicated hosts or bare metal instances.
Master node controls and directs the cluster (terminating it ends the cluster)
Core node processes and stores data using HDFS (risk of data loss if terminated)
Task nodes process data but do not hold persistent data - add spot capacity here.
Route 53 Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost effective way to route end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
Amazon Route 53 does not have a default TTL for any record type
An application uses geolocation-based routing on Route 53.
Route 53 receives a DNS query but is unable to detect the requester’s geolocation: the default location record is returned if one is configured; otherwise, a “no answer” response is returned.
When using Alias Resource Record Set, Amazon Route 53 uses the CloudFront, Elastic Beanstalk, Elastic Load Balancing, or Amazon S3 TTLs
Health Check needs to be configured for Route 53 to become aware of application down scenarios. It will then act on the routing configuration specified
To point a Zone Apex record to another AWS supported endpoint, you need to use an Alias resource record set.
Aurora Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud, that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.
Aurora automatically replicates data 6 ways across three different availability zones. For other database engines in RDS, to replicate data to a different AZ, you would need to enable multi-az deployment to setup a standby instance in a different availability zone
You have an application that receives traffic only during certain times of the year. The rest of the time, it sees very little traffic.
Aurora Serverless has a pause and resume capability to automatically stop the database compute capacity after a specified period of inactivity. When paused, you are charged only for storage. It automatically resumes when new database connections are requested
In Aurora, Read Replica is promoted as a primary during a primary instance failure. If you do not have an Aurora Read Replica, then Aurora would launch a new instance and promote it to primary. In other RDS products, you would need to use a multi-AZ deployment to configure a standby instance
Aurora supports MySQL or PostgreSQL compatibility when launching an Aurora database. This allows existing tools and clients to connect to Aurora without requiring modification
You have configured an Aurora database with five read replicas. What is the recommended mechanism for clients to connect to read replicas? Each Aurora DB cluster has a reader endpoint. If there is more than one Aurora Replica, the reader endpoint directs each connection request to one of the Aurora Replicas.
You would like to automatically replace instances that are not healthy due to underlying infrastructure or common guest OS related issues. Autoscaling automatically does this.
Data security is the responsibility of the customer. AWS provides capabilities to manage data security; however, it is up to the customer to take advantage of security capabilities based on their individual needs. Physical infrastructure, Facilities, Host Computers, Network infrastructure are all responsibilities of AWS
Recovery Point Objective indicates the acceptable amount of data loss measured in time. If disaster strikes at time T and your RPO is 2 hours, then you have processes and procedures in place to restore the systems as they appeared at T-2.
Recovery Time Objective captures time it takes to restore business processes to an acceptable level after a disaster.
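The RPO definition above is just subtraction - a tiny illustration:

```python
from datetime import datetime, timedelta

def worst_case_restore_point(disaster_time, rpo_hours):
    """With an RPO of N hours, the restored system may reflect state
    as old as T - N, where T is when disaster struck."""
    return disaster_time - timedelta(hours=rpo_hours)

# Disaster at noon with a 2-hour RPO -> data as of 10:00 may be all you get.
restore_point = worst_case_restore_point(datetime(2019, 5, 1, 12, 0), 2)
```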
AWS Five Pillars
- Operational Excellence
- ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
- Security
- ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
- Reliability
- ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
- Performance Efficiency
- ability to use computing resources efficiently to meet system requirements and to maintain that efficiency as demand changes and technologies evolve.
- Cost Optimisation
- ability to avoid or eliminate unneeded cost or suboptimal resources.
AWS Shared Responsibility Model
AWS responsibility “Security of the Cloud” - AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services.
Customer responsibility “Security in the Cloud” – Customer responsibility will be determined by the AWS Cloud services that a customer selects. This determines the amount of configuration work the customer must perform as part of their security responsibilities. For example, a service such as Amazon Elastic Compute Cloud (Amazon EC2) is categorized as Infrastructure as a Service (IaaS) and, as such, requires the customer to perform all of the necessary security configuration and management tasks. Customers that deploy an Amazon EC2 instance are responsible for management of the guest operating system (including updates and security patches), any application software or utilities installed by the customer on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. For abstracted services, such as Amazon S3 and Amazon DynamoDB, AWS operates the infrastructure layer, the operating system, and platforms, and customers access the endpoints to store and retrieve data. Customers are responsible for managing their data (including encryption options), classifying their assets, and using IAM tools to apply the appropriate permissions.
In regards to EC2, which of the following is not a customer’s responsibility under the shared responsibility model? Decommissioning and destruction of storage media.
Four levels of AWS premium support
Basic, Developer, Business, Enterprise
- Maximum response time for a Business Level Premium Support Case: 1 hour
Credit Card Payments
- standardized architecture for Payment Card Industry (PCI) Data Security Standard (DSS) compliance.
- As the AWS platform is PCI DSS Level 1 compliant, I can immediately deploy a website to it that can take and store credit card details, and I do not need to get any kind of delta accreditation from a QSA. FALSE
- A Qualified Security Assessor (QSA) is a person who has been certified by the PCI Security Standards Council to audit merchants for Payment Card Industry Data Security Standard (PCI DSS) compliance.
14 Regions at the time these notes were written (the count keeps growing)
The AWS platform does not provide much protection against social engineering attacks; it does provide protection against the other attacks (man-in-the-middle, IP spoofing, port scanning).
After establishing a Direct Connect service between your VPC and their on-premises network, and confirming all the routing, firewalls, and authentication, you find that while you can resolve names against their DNS, the other company's services are unable to resolve names against your DNS servers.
Route 53 has a security feature that prevents internal DNS from being read by external sources. The workaround is to create an EC2-hosted DNS instance that does zone transfers from the internal DNS and allows itself to be queried by external servers.
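One way to picture that workaround: a BIND configuration on the EC2-hosted DNS instance that pulls the internal zone via zone transfer and answers queries from the partner network. All names and addresses below are hypothetical placeholders, not anything from the exam question.

```
// Hypothetical named.conf fragment on the EC2-hosted DNS instance.
zone "corp.example.com" {
    type slave;                 // secondary copy of the internal zone
    masters { 10.0.0.2; };      // internal DNS server (assumed address)
    file "slaves/corp.example.com";
};
options {
    allow-query { 198.51.100.0/24; };  // partner's on-premises range (assumed)
    recursion no;                      // answer only for zones we hold
};
```

The EC2 instance sits in the middle: internal DNS stays unreadable from outside, while the partner queries the replica.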
You have been engaged by a company to design and lead a migration to an AWS environment. The team is concerned about the capabilities of the new environment, especially when it comes to avoiding bottlenecks. The design calls for about 20 instances (C3.2xLarge) pulling jobs/messages from SQS. Network traffic per instance is estimated to be around 500 Mbps at the beginning and end of each job. Which network configuration should you plan on deploying?
When considering network traffic, you need to understand the difference between storage traffic and general network traffic, and the ways to address each. The 10 Gbps figure is a red herring: the 500 Mbps only occurs in short bursts at the beginning and end of each job, so sustained throughput never approaches 10 Gbps. Wherever possible, use simple solutions such as spreading the load out rather than expensive high-tech solutions.
- Spread the instances over multiple AZs to minimise traffic concentration and maximise fault tolerance.
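The back-of-envelope arithmetic behind the red herring above can be made explicit. A quick sketch (the 4-AZ split is an assumed example, not part of the question):

```python
# Worst-case aggregate bandwidth if all workers burst simultaneously.
instances = 20
peak_mbps_per_instance = 500            # only at job start/end, not sustained

aggregate_gbps = instances * peak_mbps_per_instance / 1000
print(aggregate_gbps)                   # 10.0 Gbps only if every burst aligns

# Spreading over AZs divides the concentration; e.g. an assumed 4-AZ split:
per_az_gbps = aggregate_gbps / 4
print(per_az_gbps)                      # 2.5 Gbps per AZ in the worst case
```

Since bursts rarely align and are brief, the sustained load is far below the 10 Gbps headline number, which is why simply spreading instances across AZs is enough.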