[ S3 Performance ]

S3 scales automatically; each prefix has its own request-rate limit, so you can improve performance by spreading objects across more prefixes

- Amazon S3 automatically scales to high request rates, latency 100-200ms

- Your application can achieve at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket

- There are no limits to the number of prefixes in a bucket

- Prefix examples (object path -> prefix) :

  1) bucket/folder1/sub1/file -> prefix : /folder1/sub1/

  2) bucket/folder1/sub2/file -> prefix : /folder1/sub2/

  3) bucket/1/file -> prefix : /1/

  4) bucket/2/file -> prefix : /2/

 

* If you spread reads across all four prefixes evenly, you can achieve 22000 requests per second for GET and HEAD
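In practice, spreading load across prefixes means fanning object keys out under many prefixes, often with a short hash. A minimal sketch, assuming your application controls the key layout (key names are hypothetical):

  import hashlib

  def spread_key(object_name: str) -> str:
      # Two hex chars of the digest give up to 256 prefixes, each of which
      # gets its own 3,500 PUT / 5,500 GET per-second budget.
      h = hashlib.md5(object_name.encode()).hexdigest()[:2]
      return f"{h}/{object_name}"

  print(spread_key("logs/2021-04-10/app.log"))  # -> "<2 hex chars>/logs/2021-04-10/app.log"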

 

[ S3 KMS Limitation ]

With SSE-KMS, the extra KMS encrypt/decrypt calls can become a performance bottleneck

- If you use SSE-KMS, you may be impacted by the KMS limits

- When you upload, it calls the GenerateDataKey KMS API

- When you download, it calls the Decrypt KMS API

- These calls count towards the KMS quota per second (5,500, 10,000, or 30,000 req/s depending on region)

- As of today, you cannot request a quota increase for KMS

 

[ S3 Performance : Upload & Download ]

1. Upload :

1) Multi-part upload

- recommended for files > 100MB

- must use for files > 5GB

- Can help parallelize uploads (speed up transfers)

 

2) S3 Transfer Acceleration (upload only)

- Increase transfer speed by transferring file to an AWS edge location which will forward the data to the S3 bucket in the target region

- Compatible with multi-part upload
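A minimal boto3 sketch combining both: multi-part settings via TransferConfig, plus the accelerate endpoint flag (bucket and file names are hypothetical, and the bucket must have Transfer Acceleration enabled):

  import boto3
  from botocore.config import Config
  from boto3.s3.transfer import TransferConfig

  # Route uploads through the nearest AWS edge location.
  s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

  transfer_config = TransferConfig(
      multipart_threshold=100 * 1024 * 1024,  # switch to multi-part above 100 MB
      multipart_chunksize=16 * 1024 * 1024,   # 16 MB parts
      max_concurrency=8,                      # upload parts in parallel
  )
  s3.upload_file("backup.tar", "my-bucket", "backups/backup.tar", Config=transfer_config)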

2. Download :

1) S3 Byte-Range Fetches

- Parallelize GETs by requesting specific byte ranges

- Better resilience in case of failures

- Can be used to speed up downloads

- Can be used to retrieve only partial data (for example the head of a file)
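A byte-range fetch is just a GET with a Range parameter; several ranges could be fetched in parallel threads to speed up a full download. A minimal boto3 sketch (bucket/key names are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # Fetch only the first 1 KB of the object (for example, a file header).
  resp = s3.get_object(Bucket="my-bucket", Key="data.bin", Range="bytes=0-1023")
  head = resp["Body"].read()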

 

[ S3 Select & Glacier Select ]

Server-side filtering in S3 gives high performance

- Retrieve less data using SQL by performing server side filtering

- Can filter by rows & columns (simple SQL statements)

- Less network transfer, less CPU cost client-side
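A minimal boto3 sketch of S3 Select over a CSV file (bucket, key, and column names are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # Server-side filtering : S3 scans the CSV and returns only matching rows/columns.
  resp = s3.select_object_content(
      Bucket="my-bucket",
      Key="sales.csv",
      ExpressionType="SQL",
      Expression="SELECT s.product, s.amount FROM S3Object s WHERE s.amount > '100'",
      InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
      OutputSerialization={"CSV": {}},
  )
  for event in resp["Payload"]:          # the response is an event stream
      if "Records" in event:
          print(event["Records"]["Payload"].decode())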

 

[ S3 Event Notifications ]

You can have SNS, SQS, a Lambda function, etc. notified when S3 events occur

Enable bucket versioning to guarantee a notification for every successful write (see below)

- ObjectCreated, ObjectRemoved, ObjectRestore, Replication...

- Object name filtering possible (ex: *.jpg)

  ex: generate thumbnails of images uploaded to S3

- Can create as many "S3 events" as desired

- Can send email/notifications (SNS), add messages into a queue (SQS), or call Lambda functions to run custom code

 

- S3 event notifications typically deliver events in seconds but can sometimes take a minute or longer

- If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent

- If you want to ensure that an event notification is sent for every successful write, you should enable versioning on your bucket
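A minimal boto3 sketch of the thumbnail example above: invoke a (hypothetical) Lambda for every .jpg upload. The Lambda must separately grant S3 permission to invoke it; all ARNs and names are placeholders:

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_notification_configuration(
      Bucket="my-bucket",
      NotificationConfiguration={
          "LambdaFunctionConfigurations": [{
              "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:make-thumbnail",
              "Events": ["s3:ObjectCreated:*"],
              "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".jpg"}]}},
          }]
      },
  )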

 

[ Amazon Athena ]

Lets you query/analyze files directly in an S3 bucket using SQL

Serverless service to perform analytics directly against S3 files

- Uses SQL language to query the files

- Has a JDBC/ODBC driver

- Charged per query and amount of data scanned

- Supports CSV, JSON, ORC, Avro, and Parquet (built on Presto)

  Use cases: Business intelligence/analytics/reporting, analyze & query VPC Flow Logs, ELB Logs, CloudTrail trails...

* to Analyze data directly on S3, use Athena
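A minimal boto3 sketch of kicking off an Athena query; the database, table, and result bucket are hypothetical and must already exist in your catalog:

  import boto3

  athena = boto3.client("athena")

  # Query logs sitting in S3; results land in another S3 location.
  resp = athena.start_query_execution(
      QueryString="SELECT status, COUNT(*) FROM elb_logs GROUP BY status",
      QueryExecutionContext={"Database": "default"},
      ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
  )
  print(resp["QueryExecutionId"])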

 

[ S3 Object Lock & Glacier Vault Lock ]

1) S3 Object Lock : lock objects for a set amount of time

Adopt a WORM (Write Once Read Many) model

Block an object version deletion for a specified amount of time

2) Glacier Vault Lock : once set, files can never be modified or deleted

Adopt a WORM model

Lock the policy for future edits (can no longer be changed)

Helpful for compliance and data retention
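A minimal boto3 sketch of an S3 Object Lock default retention rule (Vault Lock itself goes through the Glacier API). This assumes a bucket that was created with Object Lock enabled; names are hypothetical:

  import boto3

  s3 = boto3.client("s3")

  # COMPLIANCE mode blocks deletion of object versions for the retention window.
  s3.put_object_lock_configuration(
      Bucket="my-locked-bucket",
      ObjectLockConfiguration={
          "ObjectLockEnabled": "Enabled",
          "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
      },
  )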

 

 

 

 


[ S3 Storage Classes ]

1. Amazon S3 Standard (General Purpose)

 - High durability of objects across multiple AZs

 - Sustain 2 concurrent facility failures

 - eg. Big Data analytics, mobile & gaming applications, content distribution

 

2. Amazon S3 Intelligent Tiering

Low latency and high performance; a small monitoring fee applies

 - High durability of objects across multiple AZs

 - Same low latency and high throughput performance of S3 Standard

 - Small monthly monitoring and auto-tiering fee

 - Automatically moves objects between two access tiers based on changing access patterns

 - Resilient against events that impact an entire AZ

 

3. Amazon S3 Standard-IA (Infrequent Access)

Suited to infrequently accessed data; high performance at lower cost

 - High durability of objects across multiple AZs

 - Suitable for data that is less frequently accessed, but requires rapid access when needed

 - Low cost compared to Amazon S3 Standard

 - Sustain 2 concurrent facility failures

 - eg. As a data store for DR, backups

 

4. Amazon S3 One Zone-IA (Infrequent Access)

Cheaper than Standard-IA; data sits in a single AZ, so it cannot serve as DR storage

 - Same as IA but data is stored in a single AZ

 - data lost when AZ is destroyed

 - Low latency and high throughput performance

 - Supports SSL for data in transit and encryption at rest

 - ~20% lower cost than Standard-IA

 - eg. Storing secondary backup copies of on-premises data, or storing data you can recreate (thumbnails)

 

5. Amazon Glacier

Inaccessible for a short period right after storing; good for long-term retention; cheap; an alternative to magnetic tape storage

 - Low cost object storage meant for archiving/backup

 - Data is retained for the longer term

 - Alternative to on-premises magnetic tape storage

 - Cost per storage per month + retrieval cost

 - Each item in Glacier is called an "Archive" (not an object)

 - Archives are stored in "Vaults" (not buckets)

 - 3 retrieval options :

   1) Expedited (1 to 5 minutes)

   2) Standard (3 to 5 hours)

   3) Bulk (5 to 12 hours)

   * Minimum storage duration of 90 days

 

6. Amazon Glacier Deep Archive

Cheaper than Amazon Glacier; data stays inaccessible for even longer after storing

 - Amazon Glacier Deep Archive - for long term storage - cheaper :

  1) Standard (12hours)

  2) Bulk (48hours)

  * Minimum storage duration of 180 days

 

7. Amazon S3 Reduced Redundancy Storage (deprecated/omitted)

 

 

# Moving between storage classes

- You can transition objects between storage classes

- For infrequently accessed objects, move them to STANDARD_IA

- For archive objects you don't need in real-time, GLACIER or DEEP_ARCHIVE

- Moving objects can be automated using a lifecycle configuration

 

[ S3 Lifecycle Rules ]

- Transition actions : define when objects are transitioned to another storage class

  1) Move objects to Standard IA class 60 days after creation

  2) Move to Glacier for archiving after 6 months

- Expiration actions: configure objects to expire (delete) after some time

  1) Access log files can be set to delete after 365 days

  2) Can be used to delete old versions of files (if versioning is enabled)

  3) Can be used to delete incomplete multi-part uploads

- Rules can be created for a certain prefix (ex: s3://mybucket/mp3/*)

- Rules can be created for certain object tags (ex: Department: Finance)
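A minimal boto3 sketch of a lifecycle rule matching the examples above: IA after 60 days, Glacier after 6 months, delete after a year (bucket name and prefix are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_lifecycle_configuration(
      Bucket="my-bucket",
      LifecycleConfiguration={
          "Rules": [{
              "ID": "archive-mp3",
              "Filter": {"Prefix": "mp3/"},
              "Status": "Enabled",
              "Transitions": [
                  {"Days": 60, "StorageClass": "STANDARD_IA"},
                  {"Days": 180, "StorageClass": "GLACIER"},
              ],
              "Expiration": {"Days": 365},  # expire (delete) after a year
          }]
      },
  )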

 

 



[ S3 Access Logs ]

- For audit purposes, you may want to log all access to S3 buckets

- Any request made to S3, from any account, authorized or denied, will be logged into another S3 bucket

- That data can be analyzed using data analysis tools

- Or Amazon Athena as we'll see later in this section

 

# Warning

If the logging bucket is also the monitored bucket, you create a logging loop and the bucket grows exponentially

* Do not set your logging bucket to be the monitored bucket

  It would create a logging loop, and your bucket would grow in size exponentially

 

 

[ S3 Replication ]

- Must enable versioning in source and destination

- Cross Region Replication (CRR)

- Same Region Replication (SRR)

- Buckets can be in different accounts

- Copying is asynchronous

- Must give proper IAM permissions to S3

- CRR Use cases : compliance, lower latency access, replication across accounts

- SRR Use cases : log aggregation, live replication between production and test accounts

 

- After activating, only new objects are replicated (not retroactive)

- Delete operations are not replicated :

  If you delete without a version ID, it adds a delete marker, not replicated

  If you delete with a version ID, it deletes in the source, not replicated

- There is no "chaining" of replication

  If bucket 1 has replication into bucket 2, which has replication into bucket 3

  Then objects created in bucket 1 are not replicated to bucket 3
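A minimal boto3 sketch of enabling replication. Versioning must already be enabled on both buckets, and the IAM role must allow S3 to read the source and write the destination (all names/ARNs are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_replication(
      Bucket="source-bucket",
      ReplicationConfiguration={
          "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
          "Rules": [{
              "Status": "Enabled",
              "Priority": 1,
              "Filter": {},                                      # replicate everything
              "DeleteMarkerReplication": {"Status": "Disabled"}, # deletes not replicated
              "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
          }],
      },
  )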

 

 

[ S3 Pre-signed URLs ]

- Can generate pre-signed URLs using SDK or CLI

  for downloads (easy, can use the CLI)

  for uploads (harder, must use the SDK)

- Valid for a default of 3600 seconds, can change timeout with --expires-in [TIME_BY_SECONDS] argument

- Users given a pre-signed URL inherit the permissions of the person who generated the URL for GET/PUT

Examples :

  1) Allow only logged-in users to download a premium video on your S3 bucket

  2) Allow an ever changing list of users to download files by generating URLs dynamically

  3) Temporarily allow a user to upload a file to a precise location in our bucket
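A minimal boto3 sketch of both directions (bucket/key names are hypothetical; the URL carries the permissions of whoever generated it):

  import boto3

  s3 = boto3.client("s3")

  # Download URL, valid for 1 hour (the 3600-second default mentioned above).
  url = s3.generate_presigned_url(
      "get_object",
      Params={"Bucket": "my-bucket", "Key": "premium/video.mp4"},
      ExpiresIn=3600,
  )

  # Upload URL (uploads need the SDK, as noted above), valid for 5 minutes.
  upload_url = s3.generate_presigned_url(
      "put_object",
      Params={"Bucket": "my-bucket", "Key": "incoming/report.pdf"},
      ExpiresIn=300,
  )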

 

 

 

 

 

 

 


[ S3 MFA-DELETE ]

Use MFA (a second factor such as a device-generated code or QR-based authenticator) to protect files in a bucket against deletion

MFA Delete can only be enabled via the CLI

With MFA Delete on, permanently deleting a file version requires MFA authentication

Regular deletes still work, but the delete history (object versions) cannot be permanently removed without MFA

- MFA (multi factor authentication) forces user to generate a code on a device (usually a mobile phone or hardware) before doing important operations on S3

- To use MFA-Delete, enable versioning on the S3 bucket

- You will need MFA to permanently delete an object version and to suspend versioning on the bucket

- You won't need MFA for enabling versioning or listing deleted versions

- Only the bucket owner (root account) can enable/disable MFA-DELETE

- MFA-Delete currently can only be enabled using the CLI

 

 


[ AWS CLI Configuration ] 

How to properly configure the CLI

1. Bad way

Authenticating on EC2 with a user's security credentials (access key ID / secret access key) via 'aws configure' is insecure; avoid it anywhere other than your own machine or internal network

- We could run 'aws configure' on EC2.

- This way is super insecure; never put your personal credentials on an EC2 instance

- your personal credentials are personal and only belong on your personal computer

- If the EC2 is compromised, so is your personal account

- If the EC2 is shared, other people may perform AWS actions while impersonating you

 

> aws configure

> enter the user's access key id

> enter the user's secret access key

> enter the region name

> cat ~/.aws/credentials shows the configured credentials (access key id / secret access key) in plain text (insecure)

 

2. Right way

Instead, authenticate the EC2 instance by attaching an IAM Role with an appropriate policy

- IAM Roles can be attached to EC2 instances

- IAM Roles can come with a policy authorizing exactly what the EC2 instance should be able to do

- EC2 instances can then use these profiles automatically without any additional configuration

 

* The policy generator (a UI for viewing and selecting permissions) makes it easy to produce IAM policy JSON

* The IAM Policy Simulator lets you test the IAM roles/policies you configure

 

[ AWS EC2 Instance Metadata ]

From the CLI, you can fetch instance metadata via curl http://169.254.169.254/latest/meta-data

- AWS EC2 Instance Metadata is powerful but one of the least known features to developers

- It allows AWS EC2 instances to "learn about themselves" without using an IAM Role for the purpose

- The URL is http://169.254.169.254/latest/meta-data

- You can retrieve the IAM Role name from the metadata, but you cannot retrieve the IAM Policy

  Metadata = Info about the EC2 instance

  Userdata = launch script of the EC2 instance

ex) 1. curl http://169.254.169.254/latest/meta-data/hostname

     2. curl http://169.254.169.254/latest/meta-data/iam/security-credentials/{EC2RoleName}

 

 

[ AWS SDK ]

- What if you want to perform actions on AWS directly from your applications code? (without using CLI)

- You can use an SDK (software development kit)

- Official SDKs are Java/.NET/Node.js/PHP/Python etc.

- We have to use the AWS SDK when coding against AWS Services such as DynamoDB

- AWS CLI uses the Python SDK(boto3)

* If you don't specify or configure a default region, then us-east-1 will be chosen by default

 

- It's recommended to use the default credential provider chain

- The default credential provider chain works seamlessly with:

  AWS credentials at ~/.aws/credentials (only on our computers or on premise)

  Instance Profile Credentials using IAM Roles (for EC2 machines, etc..)

  Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

- Overall, Never Ever Store AWS Credentials in your code.
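A minimal sketch of relying on the chain: no credentials appear in the code, boto3 resolves them from the sources listed above (environment variables, ~/.aws/credentials, or the instance profile):

  import boto3

  # boto3 walks the default credential provider chain automatically.
  s3 = boto3.client("s3", region_name="us-east-1")
  for bucket in s3.list_buckets()["Buckets"]:
      print(bucket["Name"])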

 

# Exponential Backoff

- Any API that fails because of too many calls needs to be retried with Exponential Backoff

- This applies to rate-limited APIs

- Retry mechanism included in SDK API calls
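The SDK already retries for you, but a hand-rolled sketch shows the idea (the error-code list here is illustrative, not exhaustive):

  import random
  import time
  import botocore.exceptions

  def call_with_backoff(fn, max_attempts=5):
      # Retry throttled calls, doubling the wait each attempt plus random jitter.
      for attempt in range(max_attempts):
          try:
              return fn()
          except botocore.exceptions.ClientError as e:
              if e.response["Error"]["Code"] not in ("Throttling", "SlowDown"):
                  raise
              time.sleep((2 ** attempt) + random.random())
      raise RuntimeError("still throttled after retries")

  # usage: call_with_backoff(lambda: s3.list_objects_v2(Bucket="my-bucket"))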

 

 


[ S3 Websites ]

- S3 can host static websites and have them accessible on the www

- The website URL will be :

  {bucket-name}.s3-website-{AWS-region}.amazonaws.com

  OR

  {bucket-name}.s3-website.{AWS-region}.amazonaws.com

- If you get a 403 (forbidden) error, make sure the bucket policy allows public reads
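A minimal boto3 sketch of enabling static website hosting (bucket and document names are hypothetical; allowing public reads is a separate bucket-policy step):

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_website(
      Bucket="my-bucket",
      WebsiteConfiguration={
          "IndexDocument": {"Suffix": "index.html"},
          "ErrorDocument": {"Key": "error.html"},
      },
  )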

 

[ # CORS ]

- An origin is a scheme (protocol), host (domain) and port

- CORS means Cross-Origin Resource Sharing

- Web Browser based mechanism to allow requests to other origins while visiting the main origin

   Same origin : http://example.com/app1 & http://example.com/app2

   Different origins : http://www.example.com & http://other.example.com 

- The requests won't be fulfilled unless the other origin allows for the requests using CORS Headers(Access-Control-Allow-Origin)

 

[ S3 CORS *** ]

- If a client does a cross-origin request on our S3 bucket, we need to enable the correct CORS headers

- You can allow for a specific origin or for * (all origins)
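A minimal boto3 sketch of a CORS rule on the bucket (origin and bucket name are hypothetical; use ["*"] to allow all origins):

  import boto3

  s3 = boto3.client("s3")

  s3.put_bucket_cors(
      Bucket="my-bucket",
      CORSConfiguration={
          "CORSRules": [{
              "AllowedOrigins": ["http://www.example.com"],
              "AllowedMethods": ["GET"],
              "AllowedHeaders": ["*"],
              "MaxAgeSeconds": 3000,  # how long browsers may cache the preflight
          }]
      },
  )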

 

[ Amazon S3 - Consistency Model ]

- Read after write consistency for PUTS of new objects

  1) As soon as a new object is written, we can retrieve it (ex: PUT 200 => GET 200)

  2) If we did a GET before to see if the object existed (ex: GET 404 => PUT 200 => GET 404) - eventually consistent

Eventual Consistency for DELETES and PUTS of existing objects

  1) If we read an object after updating, we might get the older version (ex: PUT 200 => PUT 200 => GET 200 (might be older version))

  2) If we delete an object, we might still be able to retrieve it for a short time (ex: DELETE 200 => GET 200)

* there's no way to request "strong consistency"



[ S3 Security ]

1) User based

- IAM policies - which API calls should be allowed for a specific user from IAM console

2) Resource Based

- Bucket Policies - bucket wide rules from the S3 console - allows cross account

- Object Access Control List (ACL) - finer grain

- Bucket Access Control List (ACL) - less common

 

* an IAM principal can access an S3 object if the user IAM permissions allow it OR the resource policy ALLOWS it

* AND there's no explicit DENY

 

 

[ S3 Bucket Policies ]

- JSON based policies

  Resources : buckets and objects

  Actions : Set of API to Allow or Deny

  Effect : Allow / Deny

  Principal : The account or user to apply the policy to

- Use S3 bucket policies to :

  Grant public access to the bucket

  Force objects to be encrypted at upload

 

[ # Hands-on : Bucket Policies ]

Using the AWS Policy Generator :

1) Select Policy Type : S3 Bucket Policy

2) Add the first statement

Effect : Deny

Principal : * (anywhere)

Actions : PutObject

Amazon Resource Name (ARN) : enter ARN/* (the bucket ARN is shown in the S3 management console)

3) Add its condition

Condition : Null

Key : s3:x-amz-server-side-encryption

value : true

4) Add the second statement

Effect : Deny

Principal : * (anywhere)

Actions : PutObject

Amazon Resource Name (ARN) : enter ARN/*

5) Add its condition

Condition : StringNotEquals

Key : s3:x-amz-server-side-encryption

value : AES256

6) Click Generate Policy to produce the JSON

7) Copy & paste the JSON into the bucket policy

* With this policy, uploading an object (file) without SSE-S3 encryption fails with Access Denied. A sketch of the resulting JSON follows.
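Roughly the JSON the generator produces for the two Deny statements above, applied with boto3 (bucket name is hypothetical):

  import json
  import boto3

  policy = {
      "Version": "2012-10-17",
      "Statement": [
          {   # Deny uploads that omit the encryption header entirely
              "Effect": "Deny",
              "Principal": "*",
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::my-bucket/*",
              "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
          },
          {   # Deny uploads whose encryption header is not AES256 (SSE-S3)
              "Effect": "Deny",
              "Principal": "*",
              "Action": "s3:PutObject",
              "Resource": "arn:aws:s3:::my-bucket/*",
              "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
          },
      ],
  }
  boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))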


 

[ Bucket settings for Block Public Access ]

- Block public access to buckets and objects granted through

  1) new access control lists (ACLs)

  2) any access control lists (ACLs)

  3) new public bucket or access point policies

Account-level or bucket-level Block Public Access settings can block all public access to buckets

- Block public and cross-account access to buckets and objects through any public bucket or access point policies

* These settings were created to prevent company data leaks

- If you know your bucket should never be public, leave these on

- Can be set at the account level

 

 

[ S3 Security - Other ]

1) Networking :

  - Supports VPC Endpoints (for instances in VPC without www internet)

2) Logging and Audit :

  - S3 Access Logs can be stored in another S3 bucket

  - API calls can be logged in AWS CloudTrail

3) User Security :

  - MFA Delete : MFA (multi factor authentication) can be required in versioned buckets to delete objects

  - Pre-Signed URLs : URLs that are valid only for a limited time (ex: premium video service for logged in users)

 



[ Amazon S3 - Buckets ]

- Amazon S3 allows people to store objects (files) in "buckets" (directories)

- Buckets must have a globally unique name

- Buckets are defined at the region level

- Naming convention

 1) No uppercase

 2) No underscore

 3) 3-63 characters long

 4) Not an IP

 5) Must start with lowercase letter or number 

* bucket name must be globally unique

* The S3 console is global, but buckets are regional resources

 

[ Amazon S3 - Objects ]

- Objects (files) have a key

- The key is the FULL path :

  s3://my-bucket/my_folder1/my_file.txt

- The key is composed of prefix (my_folder1/) + object name (my_file.txt)

- There is no concept of directories within buckets

- Object values are the content of the body :

   Max Object Size is 5TB

   If uploading more than 5GB, must use "multi-part upload"

- Metadata (list of text key/value pairs - system or user metadata)

- Tags (Unicode key/value pair - up to 10) - useful for security/lifecycle

- Version ID (if versioning is enabled)

 

[ Amazon S3 - Versioning ]

- You can version your files in Amazon S3

- It is enabled at the bucket level

- Same key overwrite will increment the version : 1,2,3..

- It is best practice to version your buckets

  Protect against unintended deletes

  Easy roll back to previous version

- Any file that is not versioned prior to enabling versioning will have version "null"

- Suspending versioning does not delete the previous versions
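A minimal boto3 sketch of the bucket-level switch (bucket name is hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # "Suspended" instead of "Enabled" stops creating new versions
  # but keeps the existing ones.
  s3.put_bucket_versioning(
      Bucket="my-bucket",
      VersioningConfiguration={"Status": "Enabled"},
  )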

 

[ S3 Encryption for Objects ]

There are 4 methods of encrypting objects in S3

1) SSE-S3 : encrypts S3 objects using keys handled & managed by AWS

  - Object is encrypted server side

  - AES-256 encryption type

  - Must set header : "x-amz-server-side-encryption":"AES256"

2) SSE-KMS : leverage AWS key Management Service to manage encryption keys

  - encryption using keys handled & managed by KMS

  - KMS Advantages : user control + audit trail

  - Object is encrypted server side

  - Must set header : "x-amz-server-side-encryption":"aws:kms"

3) SSE-C : when you want to manage your own encryption keys

  - Server-side encryption using data keys fully managed by the customer outside of AWS

  - Amazon S3 does not store the encryption key you provide

  - HTTPS must be used

  - Encryption keys must be provided in HTTP headers, for every HTTP request made

4) Client Side Encryption

  - Client library such as the Amazon S3 Encryption Client

  - Clients must encrypt data themselves before sending to S3

  - Clients must decrypt data themselves when retrieving from S3

  - Customer fully manages the keys and encryption cycle
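A minimal boto3 sketch of the two server-side options above; boto3 sets the x-amz-server-side-encryption header for you (bucket, keys, and the KMS key alias are hypothetical):

  import boto3

  s3 = boto3.client("s3")

  # SSE-S3 : AWS-managed keys, AES-256.
  s3.put_object(Bucket="my-bucket", Key="doc.txt", Body=b"hello",
                ServerSideEncryption="AES256")

  # SSE-KMS : keys handled & managed by KMS.
  s3.put_object(Bucket="my-bucket", Key="doc-kms.txt", Body=b"hello",
                ServerSideEncryption="aws:kms",
                SSEKMSKeyId="alias/my-app-key")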

 

# Encryption in transit (SSL/TLS)

- Amazon S3 exposes :

  HTTP endpoint : non encrypted

  HTTPS endpoint : encryption in flight

- You are free to use the endpoint you want, but HTTPS is recommended

- Most clients would use the HTTPS endpoint by default

* HTTPS is mandatory for SSE-C

 

 

 



EC2 Instance :

- Use a Golden AMI : Install your applications, OS dependencies etc. beforehand and launch your EC2 instance from the Golden AMI

- Bootstrap using User Data : For dynamic configuration, use User Data scripts

- Hybrid : mix Golden AMI and User Data (Elastic Beanstalk)

RDS Databases :

- Restore from a snapshot : the database will have schemas and data ready

EBS Volumes :

- Restore from a snapshot : the disk will already be formatted and have data

 

 

# Developer problems on AWS

- Managing infrastructure

- Deploying Code

- Configuring all the databases, load balancers, etc

- Scaling concerns

 

- Most web apps have the same architecture (ALB + ASG)

- All the developers want is for their code to run

- Possibly, consistently across different applications and environments

 

 

 

[ AWS Elastic Beanstalk ]

- Elastic Beanstalk is a developer-centric view of deploying an application on AWS

- It uses all the components we've seen before : EC2, ASG, ELB, RDS, etc...

- But it's all in one view that's easy to make sense of

- We still have full control over the configuration

- Beanstalk is free but you pay for the underlying instances

- Managed service

  -- Instance configuration/OS is handled by beanstalk

  -- Deployment strategy is configurable but performed by ElasticBeanStalk

- Just the application code is the responsibility of the developer

- Three architecture models :

  1) Single Instance deployment : good for dev

  2) LB+ASG : great for production or pre-production web applications

  3) ASG only : great for non-web apps in production (workers, etc..)

- Elastic Beanstalk has three components

  1) Application

  2) Application version : each deployment gets assigned a version

  3) Environment name (dev/test/prod..) : free naming

- You deploy application versions to environments and can promote application versions to the next environment

- Rollback feature to previous application version

- Full control over lifecycle of environments

Support for many platforms : Go, Java, Java with Tomcat, Node.js, PHP, Python, Ruby, single/multi-container Docker, preconfigured Docker... (If not supported, you can write your custom platform)

 



[ Route53 ]

- Route53 is a Managed DNS (Domain Name System)

- DNS is a collection of rules and records which helps clients understand how to reach a server through its domain name

* You pay $0.50 per month per hosted zone

- In AWS, the most common records are :

  1) A : host name to IPv4

  2) AAAA : hostname to IPv6

  3) CNAME : hostname to hostname

  4) Alias : hostname to AWS resource

- Route 53 can use :

  public domain names you own (or buy)

  private domain names that can be resolved by your instances in your VPCs.

 

- Route 53 has advanced features such as :

  Load balancing (through DNS - also called client load balancing)

  Health checks (although limited..)

  Routing policy : simple, failover, geolocation, latency, weighted, multi value 

 

[ DNS Records TTL (Time to Live) ]

How long a DNS answer lives in the client's cache.

The web browser sends a DNS request to Route 53 and receives the domain's IP together with a TTL; it caches the answer for the TTL duration, then re-queries DNS and re-caches the IP once the TTL expires.

The longer the TTL, the less DNS traffic, but the higher the chance the browser uses an outdated IP (e.g., after you change the A record).

A TTL value is mandatory for every DNS record.

- High TTL (eg. 24 hour)

  Less traffic on DNS, Possibly outdated records

- Low TTL (eg. 60 seconds)

  More traffic on DNS, Records are outdated for less time, Easy to change records

* TTL is mandatory for each DNS record

 

[ CNAME vs Alias ]

CNAME redirects a domain (host) to another hostname; it cannot be used for the root domain; not free.

Alias redirects a domain (host) to an AWS resource; it works for the root domain too; free.

CNAME :

- Points a hostname to any other hostname (app.mydomain.com > blabla.anything.com)

- only for Non Root domain (eg. something.mydomain.com)

- not free

Alias :

- Points a hostname to an AWS Resource (app.mydomain.com > blabla.amazonaws.com)

- Works for Root domain and non root domain (eg. mydomain.com)

- Free of charge

- Native health check

 

[ Simple Routing Policy ]

Maps one CNAME/Alias to one A record (1:1); health checks cannot be attached.

If one CNAME/Alias has two or more A records, the client picks an IP at random.

- Use when you need to redirect to a single resource

- You can't attach health checks to simple routing policy

* If multiple values are returned, a random one is chosen by the client

 (=client side load balancing)

 

[ Weighted Routing Policy ]

Distributes traffic by giving each A record a different weight

- Control the % of the requests that go to specific endpoint

- Helpful to test 1% of traffic on new app version for example

- Helpful to split traffic between two regions

- Can be associated with Health Checks

 

[ Latency Routing Policy ]

Redirects to the A record with the lowest response time.

(eg. with instances in Korea/US/UK regions behind one latency-routed record, a DNS request from Seoul is answered with the Korean instance's A record)

- Redirect to the server that has the least latency close to us

- Super helpful when latency of users is a priority

- Latency is evaluated in terms of the user to the designated AWS Region (each user is routed to the host with the lowest latency for them)

- Germany may be directed to the US (if that's the lowest latency)

 

[ Health Checks ]

Route 53 pings the instance (IP) at the configured check interval to determine its health

- After 3 (default value is 3) consecutive failed health checks => unhealthy

- After 3 (default value is 3) consecutive passed health checks => healthy

- Default Health Check Interval : 30s (can set to 10s - higher cost)

- About 15 health checkers will check the endpoint health

   => one request every 2 seconds on average

- Can have HTTP, TCP and HTTPS health checks (no SSL verification)

- Possibility of integrating the health check with CloudWatch 

* Health checks can be linked to Route53 DNS queries

 

[ Failover Routing Policy ]

1. The web browser sends a DNS request to Route 53

2. Route 53 health-checks the primary instance

3. If the primary instance is unhealthy, Route 53 answers with the secondary instance (DR : disaster recovery)

 

[ Geolocation Routing Policy ]

Requests from a configured location are handled by the instance behind that location's A record

Requests from any unconfigured location are handled by the instance behind the default A record

- Different from Latency based

- This is routing based on user location

- Here we specify : traffic from the UK should go to this specific IP

* Should create a "default" policy (in case there's no match on location)

 

[ Multi Value Routing Policy (=client side load balancing) ]

Up to 8 A records can be set for the same DNS name

When the client queries Route 53, only healthy instances are returned

The client picks one of the healthy instances at random

- Use when routing traffic to multiple resources

- Want to associate a Route 53 health checks with records

- Up to 8 healthy records are returned for each Multi Value query

* Multi Value is not a substitute for having an ELB

 

[ # Hands-on : configuring records and health checks in Route 53 ]

1. Create a health check (enter the instance IP or domain)

2. Create a Route 53 record

- Name : sample.testaws.com (here "sample" becomes the record set's name and the domain)

- Type : A record (IPv4)

- TTL : how long the returned IP stays cached

- Value : the value for the chosen type; for an A record, the instance's IPv4 address

- Routing Policy : choose simple (single A record), failover, geolocation, latency, weighted, multi value...

3. Depending on the chosen routing policy, set Associate with Health Check to Yes and select a health check

: With this setup, clients send DNS requests to Route 53, which periodically pings the IP's instance via the health check to determine whether it is healthy; Route 53 then answers according to the instance's state and the chosen routing policy. A scripted equivalent is sketched below.
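For reference, a boto3 sketch of the same record creation; the hosted zone ID, health-check ID, and IP are hypothetical placeholders (a simple-routing record would omit SetIdentifier/Weight/HealthCheckId):

  import boto3

  route53 = boto3.client("route53")

  # Weighted A record tied to a health check.
  route53.change_resource_record_sets(
      HostedZoneId="Z0000000000000",
      ChangeBatch={
          "Changes": [{
              "Action": "UPSERT",
              "ResourceRecordSet": {
                  "Name": "sample.testaws.com",
                  "Type": "A",
                  "TTL": 300,
                  "SetIdentifier": "primary",
                  "Weight": 80,
                  "HealthCheckId": "11111111-2222-3333-4444-555555555555",
                  "ResourceRecords": [{"Value": "203.0.113.10"}],
              },
          }]
      },
  )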

 

 

[ Route 53 as a Registrar ]


- A domain name registrar is an organization that manages the reservation of Internet domain names

(eg. Google Domains, and also Route53(AWS))

* Domain Registrar != DNS (but each domain registrar usually comes with some DNS features)

 

# 3rd Party Registrar with AWS Route 53

Using Route 53's DNS servers with a 3rd-party registrar :

1) Configure custom name servers instead of the name servers the 3rd party (ex: Google) provides

2) Set those custom name servers to the name servers of the hosted zone created in Route 53 (after creating the hosted zone, its details show the name servers)

- If you buy your domain on 3rd party website, you can still use Route53

1) Create a Hosted Zone in Route53

2) Update NS Records on 3rd party website to use Route53 name servers

 

 

 

 


