[ S3 Access Logs ]

- For audit purposes, you may want to log all access to S3 buckets

- Any request made to S3, from any account, authorized or denied, will be logged into another S3 bucket

- That data can be analyzed using data analysis tools

- Or Amazon Athena as we'll see later in this section
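
A minimal boto3 sketch of turning this on (bucket names are placeholders; the target bucket must separately grant S3's log delivery permission to write into it):

import boto3

s3 = boto3.client("s3")

# Deliver access logs for my-app-bucket into my-logging-bucket.
# The two buckets must be different -- see the warning below.
s3.put_bucket_logging(
    Bucket="my-app-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-logging-bucket",
            "TargetPrefix": "access-logs/",
        }
    },
)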

 

# Warning

* Do not set your logging bucket to be the monitored bucket

  It will create a logging loop, and your bucket will grow in size exponentially

 

 

[ S3 Replication ]

- Must enable versioning in source and destination

- Cross Region Replication (CRR)

- Same Region Replication (SRR)

- Buckets can be in different accounts

- Copying is asynchronous

- Must give proper IAM permissions to S3

- CRR Use cases : compliance, lower latency access, replication across accounts

- SRR Use cases : log aggregation, live replication between production and test accounts

 

- After activating, only new objects are replicated (not retroactive)

- For DELETE operations : delete operations are not replicated

  If you delete without a version ID, it adds a delete marker, which is not replicated

  If you delete with a version ID, it deletes in the source only; the delete is not replicated

- There is no "chaining" of replication

  If bucket 1 has replication into bucket 2, which has replication into bucket 3

  Then objects created in bucket 1 are not replicated to bucket 3
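
A sketch of configuring replication on the source bucket with boto3 (bucket names and the role ARN are placeholders; versioning must already be enabled on both buckets, and the role must grant S3 the replication permissions mentioned above):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [
            {
                "ID": "replicate-all",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # replicate every new object
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
            }
        ],
    },
)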

 

 

[ S3 Pre-signed URLs ]

- Can generate pre-signed URLs using SDK or CLI

  for downloads (easy, can use the CLI)

  for uploads (harder, must use the SDK)

- Valid for a default of 3600 seconds; change the timeout with the --expires-in [TIME_IN_SECONDS] argument

- Users given a pre-signed URL inherit the permissions of the person who generated the URL for GET/PUT

Examples :

  1) Allow only logged-in users to download a premium video on your S3 bucket

  2) Allow an ever-changing list of users to download files by generating URLs dynamically

  3) Temporarily allow a user to upload a file to a precise location in our bucket
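
For example, generating both kinds of URL with boto3 (bucket and keys are placeholders; the URL carries the permissions of the credentials that signed it):

import boto3

s3 = boto3.client("s3")

# Download (GET) URL, valid for 5 minutes.
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "premium/video.mp4"},
    ExpiresIn=300,  # seconds, like the CLI's --expires-in
)

# Upload (PUT) URL -- this is the case that needs the SDK.
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/report.csv"},
    ExpiresIn=300,
)

print(download_url)
print(upload_url)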

 

 

 

 

 

 

 


[ S3 MFA-DELETE ]

Use MFA (a second factor such as an authenticator-app or QR-code-based code) to protect files in a bucket against deletion.

MFA delete can only be enabled from the CLI.

With MFA delete enabled, permanently deleting a file version requires MFA authentication.

Regular deletes still work, but deleted versions cannot be permanently removed without MFA.

- MFA (multi factor authentication) forces user to generate a code on a device (usually a mobile phone or hardware) before doing important operations on S3

- To use MFA-Delete, enable versioning on the S3 bucket

- You will need MFA to permanently delete an object version and to suspend versioning on the bucket

- You won't need MFA for enabling versioning or listing deleted versions

- Only the bucket owner (root account) can enable/disable MFA-DELETE

- MFA-Delete currently can only be enabled using the CLI
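
The notes above say to use the CLI; for reference, the underlying PutBucketVersioning call it issues looks roughly like this in boto3 (must run with root credentials; the MFA value is the device serial/ARN, a space, and the current code -- all placeholders here):

import boto3

s3 = boto3.client("s3")

# MFA-Delete rides on the bucket's versioning configuration.
s3.put_bucket_versioning(
    Bucket="my-bucket",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)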

 

 


[ AWS CLI Configuration ] 

How to properly configure the CLI

1. Bad way

Authenticating to AWS from EC2 with a user's security credentials (access key ID / secret access key) via the aws configure command is insecure; avoid it anywhere other than your local machine or internal network.

- We could run 'aws configure' on EC2.

- This way is super insecure; never put your personal credentials on an EC2 instance

- your personal credentials are personal and only belong on your personal computer

- If the EC2 is compromised, so is your personal account

- If the EC2 is shared, other people may perform AWS actions while impersonating you

 

> aws configure

> enter the user's access key id

> enter the user's secret access key

> enter the region name

> cat ~/.aws/credentials then shows the stored credentials (access key id / secret access key) in plain text - a security weakness

 

2. Right way

Authenticate EC2 instances by attaching an IAM Role with the appropriate policy instead.

- IAM Roles can be attached to EC2 instances

- IAM Roles can come with a policy authorizing exactly what the EC2 instance should be able to do

- EC2 instances can then use these profiles automatically without any additional configuration

 

* The policy generator (a UI for viewing and selecting permissions) makes it easy to produce the IAM policy JSON

* The IAM policy simulator lets you test the IAM Roles/policies you configured

 

[ AWS EC2 Instance Metadata ]

From the CLI, you can fetch instance metadata via curl http://169.254.169.254/latest/meta-data

- AWS EC2 Instance Metadata is powerful but one of the least known features to developers

- It allows AWS EC2 instances to "learn about themselves" without using an IAM Role for the purpose

- The URL is http://169.254.169.254/latest/meta-data

- You can retrieve the IAM Role name from the metadata, but you cannot retrieve the IAM Policy

  Metadata = Info about the EC2 instance

  Userdata = launch script of the EC2 instance

ex) 1. curl http://169.254.169.254/latest/meta-data/hostname

    2. curl http://169.254.169.254/latest/meta-data/iam/security-credentials/{EC2RoleName}

 

 

[ AWS SDK ]

- What if you want to perform actions on AWS directly from your applications code? (without using CLI)

- You can use an SDK (software development kit)

- Official SDKs are Java/.NET/Node.js/PHP/Python etc.

- We have to use the AWS SDK when coding against AWS Services such as DynamoDB

- The AWS CLI uses the Python SDK (boto3)

* If you don't specify or configure a default region, then us-east-1 will be chosen by default

 

- It's recommended to use the default credential provider chain

- The default credential provider chain works seamlessly with:

  AWS credentials at ~/.aws/credentials (only on our computers or on premise)

  Instance Profile Credentials using IAM Roles (for EC2 machines, etc..)

  Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

- Overall, Never Ever Store AWS Credentials in your code.
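
A minimal example of leaning on the default chain -- no keys appear in the code; boto3 resolves credentials from environment variables, ~/.aws/credentials, or the instance profile (the region here is an arbitrary example):

import boto3

# No credentials in code: the default provider chain resolves them.
s3 = boto3.client("s3", region_name="ap-northeast-2")

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])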

 

# Exponential Backoff

- Any API that fails because of too many calls needs to be retried with Exponential Backoff

- This applies to rate-limited APIs

- A retry mechanism is included in SDK API calls
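
A generic sketch of the idea (the SDK already retries like this internally, so hand-rolling it mostly matters around your own rate-limited calls):

import random
import time

def call_with_backoff(fn, max_retries=5):
    """Retry fn() with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:  # in practice, catch the SDK's specific throttling error
            if attempt == max_retries - 1:
                raise
            # Wait 1s, 2s, 4s, 8s... plus random jitter to avoid retry storms.
            time.sleep(2 ** attempt + random.random())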

 

 


[ S3 Websites ]

- S3 can host static websites and have them accessible on the www

- The website URL will be :

  {bucket-name}.s3-website-{AWS-region}.amazonaws.com

  OR

  {bucket-name}.s3-website.{AWS-region}.amazonaws.com

- If you get a 403 (forbidden) error, make sure the bucket policy allows public reads
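
A sketch of enabling website hosting with boto3 (bucket name and document keys are placeholders; the bucket policy must separately allow public s3:GetObject reads):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="my-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},  # served for the root URL
        "ErrorDocument": {"Key": "error.html"},
    },
)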

 

[ # CORS ]

- An origin is a scheme (protocol), host (domain) and port

- CORS means Cross-Origin Resource Sharing

- Web Browser based mechanism to allow requests to other origins while visiting the main origin

   Same origin : http://example.com/app1 & http://example.com/app2

   Different origins : http://www.example.com & http://other.example.com 

- The requests won't be fulfilled unless the other origin allows for the requests, using CORS Headers (Access-Control-Allow-Origin)

 

[ S3 CORS *** ]

- If a client does a cross-origin request on our S3 bucket, we need to enable the correct CORS headers

- You can allow for a specific origin or for * (all origins)
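
For instance, a CORS rule allowing GETs from one specific origin, sketched with boto3 (bucket and origin are placeholders; use ["*"] to allow all origins):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],  # or ["*"]
                "AllowedMethods": ["GET"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
            }
        ]
    },
)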

 

[ Amazon S3 - Consistency Model ]

- Read after write consistency for PUTS of new objects

  1) As soon as a new object is written, we can retrieve it (ex: PUT 200 => GET 200)

  2) If we did a GET before to see if the object existed (ex: GET 404 => PUT 200 => GET 404) - eventually consistent

Eventual Consistency for DELETES and PUTS of existing objects

  1) If we read an object after updating, we might get the older version (ex: PUT 200 => PUT 200 => GET 200 (might be older version))

  2) If we delete an object, we might still be able to retrieve it for a short time (ex: DELETE 200 => GET 200)

* there's no way to request "strong consistency"


[ S3 Security ]

1) User based

- IAM policies - which API calls should be allowed for a specific user from IAM console

2) Resource Based

- Bucket Policies - bucket wide rules from the S3 console - allows cross account

- Object Access Control List (ACL) - finer grain

- Bucket Access Control List (ACL) - less common

 

* an IAM principal can access an S3 object if the user IAM permissions allow it OR the resource policy ALLOWS it

* AND there's no explicit DENY

 

 

[ S3 Bucket Policies ]

- JSON based policies

  Resources : buckets and objects

  Actions : Set of API to Allow or Deny

  Effect : Allow / Deny

  Principal : The account or user to apply the policy to

- Use S3 bucket policies to :

  Grant public access to the bucket

  Force objects to be encrypted at upload

 

[ # Hands-on : Bucket Policies ]

Use the Policy Generator

1) Select Policy Type : S3 Bucket Policy

2) Add statements

Configure the first statement

Effect : select Deny

Principal : * (anywhere)

Actions : select PutObject

Amazon Resource Name (ARN) : enter ARN/* (the bucket ARN is shown in the S3 management console)

3) Add Conditions

Condition : Null

Key : s3:x-amz-server-side-encryption

value : true

 

2) Add statements

Configure the second statement

Effect : select Deny

Principal : * (anywhere)

Actions : select PutObject

Amazon Resource Name (ARN) : enter ARN/* (the bucket ARN is shown in the S3 management console)

3) Add Conditions

Condition : StringNotEquals

Key : s3:x-amz-server-side-encryption

value : AES256

 

4) Clicking Generate Policy produces the JSON

5) Copy & paste the JSON into the bucket policy

 

* With this policy in place, uploading an object (file) without SSE-S3 encryption fails with Access Denied.
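
The same two deny statements can be applied from code instead of the console; a boto3 sketch (bucket name is a placeholder):

import json
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny uploads that omit the encryption header entirely
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
        {   # Deny uploads whose encryption header is not AES256 (SSE-S3)
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {
                "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
            },
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))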


 

[ Bucket settings for Block Public Access ]

- Block public access to buckets and objects granted through

  1) new access control lists (ACLs)

  2) any access control lists (ACLs)

  3) new public bucket or access point policies

All public access to buckets can be blocked via the account-level Block Public Access settings or the bucket-level Block public access settings

- Block public and cross-account access to buckets and objects through any public bucket or access point policies

* These settings were created to prevent company data leaks

- If you know your bucket should never be public, leave these on

- Can be set at the account level

 

 

[ S3 Security - Other ]

1) Networking :

  - Supports VPC Endpoints (for instances in a VPC without Internet access)

2) Logging and Audit :

  - S3 Access Logs can be stored in another S3 bucket

  - API calls can be logged in AWS CloudTrail

3) User Security :

  - MFA Delete : MFA (multi factor authentication) can be required in versioned buckets to delete objects

  - Pre-Signed URLs : URLs that are valid only for a limited time (ex: premium video service for logged in users)

 


[ Amazon S3 - Buckets ]

- Amazon S3 allows people to store objects (files) in "buckets" (directories)

- Buckets must have a globally unique name

- Buckets are defined at the region level

- Naming convention

 1) No uppercase

 2) No underscore

 3) 3-63 characters long

 4) Not an IP

 5) Must start with lowercase letter or number 

* bucket name must be globally unique

* the console is global, but S3 is a regional service

 

[ Amazon S3 - Objects ]

- Objects (files) have a key

- The key is the FULL path :

  s3://my-bucket/my_folder1/my_file.txt

- The key is composed of prefix (my_folder1/) + object name (my_file.txt)

- There is no concept of directories within buckets

- Object values are the content of the body :

   Max Object Size is 5TB

   If uploading more than 5GB, must use "multi-part upload"

- Metadata (list of text key/value pairs - system or user metadata)

- Tags (Unicode key/value pair - up to 10) - useful for security/lifecycle

- Version ID (if versioning is enabled)

 

[ Amazon S3 - Versioning ]

- You can version your files in Amazon S3

- It is enabled at the bucket level

- Same key overwrite will increment the version : 1,2,3..

- It is best practice to version your buckets

  Protect against unintended deletes

  Easy roll back to previous version

- Any file that is not versioned prior to enabling versioning will have version "null"

- Suspending versioning does not delete the previous versions

 

[ S3 Encryption for Objects ]

There are 4 methods of encrypting objects in S3

1) SSE-S3 : encrypts S3 objects using keys handled & managed by AWS

  - Object is encrypted server side

  - AES-256 encryption type

  - Must set header : "x-amz-server-side-encryption":"AES256"

2) SSE-KMS : leverage AWS key Management Service to manage encryption keys

  - encryption using keys handled & managed by KMS

  - KMS Advantages : user control + audit trail

  - Object is encrypted server side

  - Must set header : "x-amz-server-side-encryption":"aws:kms"

3) SSE-C : when you want to manage your own encryption keys

  - server-side encryption using data keys fully managed by the customer outside of AWS

  - Amazon S3 does not store the encryption key you provide

  - HTTPS must be used

  - The encryption key must be provided in HTTP headers, for every HTTP request made

4) Client Side Encryption

  - Client library such as the Amazon S3 Encryption Client

  - Clients must encrypt data themselves before sending to S3

  - Clients must decrypt data themselves when retrieving from S3

  - Customer fully manages the keys and encryption cycle
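
A small upload sketch showing how the SSE headers map to boto3 parameters (bucket/key are placeholders):

import boto3

s3 = boto3.client("s3")

# SSE-S3: boto3 sends the x-amz-server-side-encryption: AES256 header.
s3.put_object(
    Bucket="my-bucket",
    Key="report.txt",
    Body=b"hello",
    ServerSideEncryption="AES256",
)

# SSE-KMS variant: ServerSideEncryption="aws:kms" (optionally with SSEKMSKeyId).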

 

# Encryption in transit (SSL/TLS)

- Amazon S3 exposes :

  HTTP endpoint : non encrypted

  HTTPS endpoint : encryption in flight

- You are free to use the endpoint you want, but HTTPS is recommended

- Most clients would use the HTTPS endpoint by default

* HTTPS is mandatory for SSE-C

 

 

 


EC2 Instance :

- Use a Golden AMI : Install your applications, OS dependencies etc. beforehand and launch your EC2 instance from the Golden AMI

- Bootstrap using User Data : For dynamic configuration, use User Data scripts

- Hybrid : mix Golden AMI and User Data (Elastic Beanstalk)

RDS Databases :

- Restore from a snapshot : the database will have schemas and data ready

EBS Volumes :

- Restore from a snapshot : the disk will already be formatted and have data

 

 

# Developer problems on AWS

- Managing infrastructure

- Deploying Code

- Configuring all the databases, load balancers, etc

- Scaling concerns

 

- Most web apps have the same architecture (ALB + ASG)

- All the developers want is for their code to run

- Ideally, consistently across different applications and environments

 

 

 

[ AWS ElasticBeanStalk ]

- ElasticBeanStalk is a developer centric view of deploying an application on AWS

- It uses all the components we've seen before : EC2, ASG, ELB, RDS, etc...

- But it's all in one view that's easy to make sense of

- We still have full control over the configuration

- BeanStalk is free but you pay for the underlying instances

- Managed service

  -- Instance configuration/OS is handled by beanstalk

  -- Deployment strategy is configurable but performed by ElasticBeanStalk

- Just the application code is the responsibility of the developer

- Three architecture models :

  1) Single Instance deployment : good for dev

  2) LB+ASG : great for production or pre-production web applications

  3) ASG only : great for non-web apps in production (workers, etc..)

- ElasticBeanStalk has three components

  1) Application

  2) Application version : each deployment gets assigned a version

  3) Environment name (dev/test/prod..) : free naming

- You deploy application versions to environments and can promote application versions to the next environment

- Rollback feature to previous application version

- Full control over lifecycle of environments

Support for many platforms : Go, Java, Java with Tomcat, Node.js, PHP, Python, Ruby, single/multi-container Docker, preconfigured Docker... (if a platform is not supported, you can write your own custom platform)

 


[ Route53 ]

- Route53 is a Managed DNS (Domain Name System)

- DNS is a collection of rules and records which helps clients understand how to reach a server through its domain name

* You pay $0.50 per month per hosted zone

- In AWS, the most common records are :

  1) A : host name to IPv4

  2) AAAA : hostname to IPv6

  3) CNAME : hostname to hostname

  4) Alias : hostname to AWS resource

- Route 53 can use :

  public domain names you own (or buy)

  private domain names that can be resolved by your instances in your VPCs.

 

- Route 53 has advanced features such as :

  Load balancing (through DNS - also called client load balancing)

  Health checks (although limited..)

  Routing policy : simple, failover, geolocation, latency, weighted, multi value 

 

[ DNS Records TTL (Time to Live) ]

How long the DNS answer stays cached in the web browser.

The web browser sends a DNS query to Route 53 and receives the domain's IP together with a TTL, then caches the answer for the TTL duration. Once the TTL expires, it queries DNS again and re-caches the IP.

The longer the TTL, the less DNS traffic, but the more likely the browser requests an outdated IP (e.g. after you change the A record).

TTL is a mandatory field of every DNS record.

- High TTL (eg. 24 hour)

  Less traffic on DNS, Possibly outdated records

- Low TTL (eg. 60 seconds)

  More traffic on DNS, Records are outdated for less time, Easy to change records

* TTL is mandatory for each DNS record

 

[ CNAME vs Alias ]

CNAME redirects a domain (host) call to another hostname; it cannot be used for the root domain, and it is not free.

Alias redirects a domain (host) call to an AWS resource; it also works for the root domain, and it is free.

CNAME :

- Points a hostname to any other hostname (app.mydomain.com > blabla.anything.com)

- only for Non Root domain (eg. something.mydomain.com)

- not free

Alias :

- Points a hostname to an AWS Resource (app.mydomain.com > blabla.amazonaws.com)

- Works for Root domain and non root domain (eg. mydomain.com)

- Free of charge

- Native health check

 

[ Simple Routing Policy ]

A 1:1 relationship: one CNAME/Alias points to one A record. Health checks cannot be used.

If two or more A records are attached to one CNAME/Alias, the client picks an IP at random.

- Use when you need to redirect to a single resource

- You can't attach health checks to simple routing policy

* If multiple values are returned, a random one is chosen by the client

 (=client side load balancing)

 

[ Weighted Routing Policy ]

A policy that distributes traffic by giving each A record a different weight.

- Control the % of the requests that go to a specific endpoint

- Helpful to test 1% of traffic on new app version for example

- Helpful to split traffic between two regions

- Can be associated with Health Checks

 

[ Latency Routing Policy ]

Redirects to the A record with the lowest response time.

(eg. with instances in the Korea/US/UK regions behind a latency routing policy on one CNAME, a DNS request from Seoul is answered with the Korean instance's A record)

- Redirect to the server that has the least latency close to us

- Super helpful when latency of users is a priority

- Latency is evaluated from the user to the designated AWS Region (each user is routed to the host with the lowest latency for them)

- Germany may be directed to the US (if that's the lowest latency)

 

[ Health Checks ]

Route 53 pings the instance (IP) at the configured check interval and uses the consecutive results to decide the instance's state.

- After 3 failed health checks (the default threshold is 3) => unhealthy

- After 3 passed health checks (the default threshold is 3) => healthy

- Default Health Check Interval : 30s (can set to 10s - higher cost)

- About 15 health checkers will check the endpoint health

   => one request every 2 seconds on average

- Can have HTTP, TCP and HTTPS health checks (no SSL verification)

- Possibility of integrating the health check with CloudWatch 

* Health checks can be linked to Route53 DNS queries

 

[ Failover Routing Policy ]

1. The web browser sends a DNS query to Route 53

2. Route 53 health-checks the primary instance

3. If the primary instance is unhealthy, the query is answered with the secondary instance (DR, disaster recovery)

 

[ Geolocation Routing Policy ]

Requests coming from a configured location are handled by the instance of that location's A record.

Requests from locations you did not configure are handled by the instance of the default A record.

- Different from Latency based

- This is routing based on user location

- Here we specify : traffic from the UK should go to this specific IP

* Should create a "default" policy (in case there's no match on location)

 

[ Multi Value Routing Policy (=client side load balancing) ]

Up to 8 A records can be attached to the same DNS name.

When a client sends a DNS query to Route 53, only healthy instances' records are returned.

The client then picks one of the healthy instances at random.

- Use when routing traffic to multiple resources

- Want to associate a Route 53 health checks with records

- Up to 8 healthy records are returned for each Multi Value query

* Multi Value is not a substitute for having an ELB

 

[ # Hands-on : configuring records and health checks in Route 53 ]

1. Create a health check (enter the instance IP or domain)

2. Create a Route 53 record

- Name : sample.testaws.com (sample becomes the record set's name and the domain)

- Type : A record ( IPv4 )

- TTL : how long the returned IP stays valid

- Value : the value for the chosen type; for an A record, the instance's IPv4 address

- Routing Policy : choose simple (single A record), failover, geolocation, latency, weighted, multi value..

3. Depending on the chosen record's routing policy, set the Associate with Health Check option to Yes and select a health check

: With this setup, the client sends DNS queries to Route 53, which periodically pings the IP via the health check to determine whether the instance is healthy or unhealthy. Behavior then follows the chosen routing policy based on the instance's state.
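
Step 2 can also be done from code; a boto3 sketch creating the same A record (the hosted zone ID and IP are placeholders):

import boto3

r53 = boto3.client("route53")

r53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",  # your hosted zone's ID
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",  # create the record, or update it if it exists
                "ResourceRecordSet": {
                    "Name": "sample.testaws.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ]
    },
)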

 

 

[ Route 53 as a Registrar ]

- A domain name registrar is an organization that manages the reservation of Internet domain names

(eg. Google Domains, and also Route53(AWS))

* Domain Registrar != DNS (but each domain registrar usually comes with some DNS features)

 

# 3rd Party Registrar with AWS Route 53

Using Route53's DNS servers with a 3rd-party registrar:

1) Configure the 3rd party (ex: Google) to use custom name servers instead of the ones it provides

2) Set those custom name servers to the name servers of the hosted zone created in Route53 (after creating the hosted zone, click it; the name servers are listed in its details)

- If you buy your domain on 3rd party website, you can still use Route53

1) Create a Hosted Zone in Route53

2) Update NS Records on 3rd party website to use Route53 name servers

 

 

 

 


[ AWS ElastiCache ]

- The same way RDS gives you managed relational databases,

- ElastiCache gives you managed Redis or Memcached

- Caches are in-memory databases with really high performance, low latency

- Helps reduce load off of databases for read intensive workloads

- Helps make your application stateless

- Write Scaling using sharding

- Read Scaling using Read Replicas

- Multi AZ with Failover Capability

- AWS takes care of OS maintenance/patching, optimizations, setup, configuration, monitoring, failure recovery and backups

 

[ ElastiCache Solution Architecture - DB Cache ]

The app queries ElastiCache first; if the data is not there (miss), it SELECTs from RDS and writes the result to the cache.

The next read of the same data is served from the cache (hit).

 

[ ElastiCache Solution Architecture - User Session Store ]

After the user logs into the app, the session data is stored in ElastiCache.

If the user later connects through a different instance, that instance fetches the session from ElastiCache and keeps the user logged in.

No need to authenticate on every request.

 

[ Redis vs Memcached ]

* Redis (similar to RDS)

 - Multi AZ with Auto-Failover

 - Read Replicas to scale reads and have high availability

 - Data Durability using AOF persistence

 - Backup and restore features

 

* Memcached

 - Multi-node for partitioning of data (sharding)

 - Non persistent

 - No backup and restore

 - Multi-threaded architecture

 

[ ElastiCache - Cache Security ]

1. All caches in ElastiCache :

  - Support SSL in flight encryption

  - Do not support IAM authentication *** 

  - IAM policies on ElastiCache are only used for AWS API-level security

2. Redis AUTH

  - You can set a pw/token when you create a Redis cluster

  - This is an extra level of security for your cache (on top of security groups)

3. Memcached

  - Supports SASL-based authentication (advanced)

 

[ # ElastiCache for Solutions Architects ] 

With Lazy Loading, cached reads can be stale, because the cached copy was not just fetched from the DB.

With Write Through, every write to the DB is also added/updated in the cache.

Patterns for ElastiCache

- Lazy Loading : all the read data is cached, data can become stale in cache

- Write Through : Adds or update data in the cache when written to a DB (no stale data)

- Session Store : store temporary session data in a cache (using TTL features)
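
A lazy-loading sketch against a Redis endpoint (the endpoint and the query_db_for_user helper are hypothetical; assumes the redis-py client):

import json
import redis  # assumes the redis-py package is installed

cache = redis.Redis(host="my-cluster.xxxxxx.cache.amazonaws.com", port=6379)

def get_user(user_id):
    """Lazy loading: serve from cache on a hit, fill from the DB on a miss."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:               # cache hit (data may be stale)
        return json.loads(cached)
    user = query_db_for_user(user_id)    # hypothetical RDS SELECT on a miss
    cache.setex(key, 3600, json.dumps(user))  # the TTL bounds staleness
    return user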

 

 

 


[ Aurora ]

- Aurora is a proprietary technology from AWS (not open sourced)

- Postgres and MySQL are both supported as Aurora DB (that means your drivers will work as if Aurora was a Postgres or MySQL database)

- Aurora is "AWS cloud optimized" and claims 5x performance improvement over MySQL on RDS, over 3x the - performance of Postgres on RDS

- Aurora storage automatically grows in increments of 10GB, up to 64TB

- Aurora can have 15 replicas while MySQL has 5, and the replication process is faster (sub 10ms replica lag)

- Failover in Aurora is instantaneous. It's High Availability native

- Aurora costs more than RDS (20% more) - but is more efficient

 

# Aurora High Availability and Read Scaling

- 6 copies of your data across 3 AZ :

  -- 4 copies out of 6 needed for writes

  -- 3 copies out of 6 needed for reads

  -- Self healing with peer-to-peer replication

  -- Storage is striped across 100s of volumes

- One Aurora Instance takes writes (master)

- Automated failover for master in less than 30 seconds

- Master + up to 15 Aurora Read Replicas serve reads

- Support for Cross Region Replication

 

 

[ Aurora DB Cluster ]

Writes (through the master) and reads (through the Read Replicas) each go through their own endpoint.

[ Aurora Security ]

- Similar to RDS because uses the same engines

- Encryption at rest using KMS

- Automated backups, snapshots and replicas are also encrypted

- Encryption in flight using SSL (same process as MySQL or Postgres)

- Possibility to authenticate using IAM token (same method as RDS)

- You are responsible for protecting the instance with security groups

- You can't SSH

 

[ Aurora Serverless ]

When load increases, additional Aurora database capacity is created automatically; when load decreases, it scales back down.

- Automated database instantiation and auto-scaling based on actual usage

- Good for infrequent, intermittent or unpredictable workloads

- No capacity planning needed

- Pay per second, can be more cost-effective

 

[ Global Aurora ]

1 master region, up to 5 secondary regions, and up to 16 read replicas per secondary region.

- Aurora Cross Region Read Replicas :

  Useful for disaster recovery

  Simple to put in place

- Aurora Global Database (recommended) :

  1 Primary Region (read/write)

  Up to 5 secondary (read-only) regions, replication lag is less than 1 second

  Up to 16 Read Replicas per secondary region

  Helps for decreasing latency

  Promoting another region (for disaster recovery) has an RTO (recovery time objective) of < 1 minute

 

 
