[ RDS : Relational Database Service ]

A managed SQL database service on AWS

- It's a managed DB service for databases that use SQL as a query language.

- It allows you to create databases in the cloud that are managed by AWS  

  MySQL, MariaDB, Aurora(AWS), Oracle...

 

# Advantages of using RDS versus deploying a DB on EC2

Benefits of using RDS instead of running the DB directly on EC2:

Replica setup for failover, read replicas for better read performance, backups and point-in-time restore

RDS is a managed service

- Automated provisioning (resources allocated on demand), OS patching

- Continuous backups and restore to specific timestamp (Point in Time Restore)

- Monitoring dashboards

- Read replicas for improved read performance

- Multi AZ setup for DR (Disaster Recovery)

- Maintenance windows for upgrades

- Scaling capability (vertical and horizontal)

- Storage backed by EBS (GP2 or IO1)

* But you can't SSH into your instances

 

# RDS Backups

Automated backups are enabled by default; manual snapshots are also available

1) Backups are automatically enabled in RDS

2) Automated backups :

  - Daily full backup of the database (during the maintenance window)

  - Transaction logs are backed up by RDS every 5 minutes

     => ability to restore to any point in time (from the oldest backup to 5 minutes ago)

  - 7 days retention (can be increased to 35 days)

3) DB Snapshots :

- Manually triggered by the user

- Retention of backup for as long as you want
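The point-in-time restore window of the automated backups above can be sketched as a small calculation (a hedged illustration; the function and variable names are mine, not an AWS API):

```python
from datetime import datetime, timedelta

def restore_window(now, retention_days=7, log_backup_minutes=5):
    """Restorable range for automated backups: from the oldest retained
    backup up to the latest shipped transaction log (~5 minutes ago)."""
    earliest = now - timedelta(days=retention_days)
    latest = now - timedelta(minutes=log_backup_minutes)
    return earliest, latest

earliest, latest = restore_window(datetime(2021, 3, 22, 12, 0))
# earliest -> 2021-03-15 12:00, latest -> 2021-03-22 11:55
```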

 

 

[ RDS - Read Replicas for read scalability ]

Up to 5 replicas, usable regardless of AZ/Region; a replica can be promoted to master

Replication is asynchronous

- Up to 5 Read Replicas

- Within AZ, Cross AZ or Cross Region

- Replication is Async, so reads are eventually consistent

- Replicas can be promoted to their own DB

- Applications must update the connection string to leverage read replicas

* Multi AZ keeps the same connection string regardless of which database is up. Read Replicas imply we need to reference them individually in our application as each read replica will have its own DNS name

Multi AZ keeps the connection string the same, but each read replica has its own DNS name, so the application's connection strings must be updated to use the read replicas
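As a sketch, routing reads to replicas in application code might look like this (the endpoint names below are hypothetical placeholders, not real hosts):

```python
import itertools

# Hypothetical DNS names: Multi AZ gives one stable writer endpoint, while
# each read replica has its own DNS name the application must know about.
WRITER = "mydb.abcdefg.us-east-1.rds.amazonaws.com"
READERS = [
    "mydb-replica-1.abcdefg.us-east-1.rds.amazonaws.com",
    "mydb-replica-2.abcdefg.us-east-1.rds.amazonaws.com",
]
_readers = itertools.cycle(READERS)

def endpoint_for(query):
    """Send SELECTs to read replicas (round-robin); writes go to the master."""
    if query.lstrip().upper().startswith("SELECT"):
        return next(_readers)
    return WRITER
```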

 

# Read Replicas Use cases

Create an RDS read replica and point the analytics workload at it.

The original app is unaffected.

1) You have a production database that is taking on normal load

2) You want to run a reporting application to run some analytics

3) You create a Read Replica to run the new workload there

4) The production application is unaffected

5) Read replicas are used for SELECT-only statements (not INSERT/UPDATE/DELETE)

 

# Read Replicas Network Cost

No network fee for replicas within the same AZ.

In AWS there's a network cost when data goes from one AZ to another

To reduce the cost, you can have your Read Replicas in the same AZ (Free)

 

 

# RDS Multi AZ (Disaster Recovery)

Synchronous replication

For failover/DR, not for reads or writes (not for scaling)

Every write is also applied to the standby replica.

If the master fails, the standby becomes the master (failover).

Can be set up across multiple AZs

- Sync replication

- One DNS name - automatic app failover to standby

- Increase availability

- Failover in case of loss of AZ, loss of network, instance or storage failure

- No manual intervention in apps

- Not used for scaling

* Read Replicas can be set up as Multi AZ for Disaster Recovery (DR) ***

 

[ RDS Security : 1. Encryption ]

RDS security: encryption

1. At rest encryption

Encryption is possible using KMS

Encryption must be defined at launch time.

If the master is not encrypted, read replicas cannot be encrypted either

- Possibility to encrypt the master & read replicas with AWS KMS - AES-256 encryption

- Encryption has to be defined at launch time

- If the master is not encrypted, the read replicas cannot be encrypted

- TDE(Transparent Data Encryption) available for Oracle and MS SQL Server

 

2. In flight encryption

- SSL certificates to encrypt data to RDS in flight

- Provide SSL options with trust certificate when connecting to database

- To enforce SSL:

  -- PostgreSQL : rds.force_ssl=1 in the AWS RDS Console (Parameter Groups)

  -- MySQL : GRANT USAGE ON *.* TO 'mysqluser'@'%' REQUIRE SSL; (Within the DB)

 

# RDS Encryption Operations

Encrypting RDS backups

- Snapshots of un-encrypted RDS databases are un-encrypted

- Snapshots of encrypted RDS databases are encrypted

- Can copy a snapshot into an encrypted one

 

To encrypt an un-encrypted RDS database :

1) Create a snapshot of the un-encrypted database

2) Copy the snapshot and enable encryption for the snapshot

3) Restore the database from the encrypted snapshot

4) Migrate applications to the new database, and delete the old database

: unencrypted DB => snapshot => copy snapshot as encrypted => create DB from snapshot

 

[ RDS Security : 2. Network & IAM ]

Network Security

- RDS databases are usually deployed within a private subnet, not in a public one

- RDS security works by leveraging security groups (the same concept as for EC2 instances) - it controls which IP/security group can communicate with RDS

 

Access Management

- IAM policies help control who can manage AWS RDS (through the RDS API)

- A traditional username and password can be used to log in to the database

- IAM-based authentication can be used to log in to RDS MySQL & PostgreSQL

 

# RDS - IAM Authentication

- IAM database authentication works with MySQL and PostgreSQL

- You don't need a password, just an authentication token obtained through IAM & RDS API calls

- Auth token has a lifetime of 15 minutes

* Benefits :

  - Network in/out must be encrypted using SSL

  - IAM to centrally manage users instead of DB

  - Can leverage IAM Roles and EC2 Instance profiles for easy integration

 

Reference:

https://wbluke.tistory.com/58

 


[ EBS Snapshots *** ]

Creating a snapshot is like backing up the data

Snapshots are not restricted to an AZ/Region

Snapshots consume IO, so don't create them while the app is under heavy load

Snapshots are stored in S3

Amazon Data Lifecycle Manager can create snapshots on a schedule

- Incremental - only backup changed blocks

- EBS backups use IO and you shouldn't run them while your application is handling a lot of traffic

- Snapshots will be stored in S3 (but you won't directly see them)

- Not necessary to detach volume to do snapshot, but recommended

- Max 100000 snapshots

- can copy snapshots across AZ or Region

- Can make AMI from Snapshot

- EBS volumes restored by snapshots need to be pre-warmed (using fio or dd command to read the entire volume)

- Snapshots can be automated using Amazon Data Lifecycle Manager

 

[ EBS Migration ]

The AZ restriction can be worked around by creating a snapshot and creating a volume from it

- EBS volumes are locked to a specific AZ

- To migrate it to a different AZ (or region) :

  1) Snapshot the volume

  2) (optional) Copy the volume to a different region

  3) Create a volume from the snapshot in the AZ of your choice

[ EBS Encryption ]

If you encrypt a snapshot and then create a volume from it, the volume is also encrypted

- When you create an encrypted EBS volume, you get the following :

  -- Data at rest is encrypted inside the volume

  -- All the data in flight moving between the instance and the volume is encrypted

  -- All snapshots are encrypted

  -- All volumes created from the snapshot are encrypted

- Encryption and decryption are handled transparently (you have nothing to do)

- Encryption has a minimal impact on latency

- EBS Encryption leverages keys from KMS (AES-256)

- Copying an unencrypted snapshot allows encryption

- Snapshots of encrypted volumes are encrypted

 

[ # Encryption : encrypt an unencrypted EBS volume ]

How to encrypt an unencrypted EBS volume

- Create an EBS snapshot of the volume

- Encrypt the EBS snapshot (using copy)

- Create new EBS volume from the snapshot (the volume will also be encrypted)

- Now you can attach the encrypted volume to the original instance

 

[ EBS vs Instance Store ]

The Instance Store (a physically attached disk) offers better IO performance than EBS and can be good for buffers/caches, but its data is lost when the instance is stopped or terminated.

- Some instances do not come with Root EBS volumes

- Instead, they come with an "Instance Store" (= ephemeral storage)

- Instance store is physically attached to the machine (EBS is a network drive)

* Pros of Instance Store :

  - Better I/O performance

  - Good for buffer/cache/scratch data temporary content

  - Data survives reboots

* Cons :

  - On stop or termination, the instance store is lost

* Local EC2 Instance Store

  - Physical disk attached to the physical server where your EC2 is

  - Very High IOPS (because physical)

  - Disks up to 7.5 TB (can change over time), striped to reach 30 TB (can change over time)

  - Block Storage (just like EBS)

  - Cannot be increased in size

  - Risk of data loss if hardware fails

 

[ EBS RAID configurations ]

# RAID 0 : striping

Extends disk space

- Combining 2 or more volumes and getting the total disk space and I/O

- If one disk fails, all the data is lost

- An application that needs a lot of IOPS and doesn't need fault-tolerance

- A database that has replication already built-in

- Using this we can have a very big disk with a lot of IOPS

# RAID 1 : increased fault tolerance, mirroring

Improves reliability through disk mirroring

- Mirroring a volume to another

- If one disk fails, our logical volume is still working

- Sends the data to two EBS volumes at the same time

- Applications that need increased volume fault tolerance

- Applications where you need to service disks
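The trade-off between the two modes can be summarized numerically (a simplified model; real RAID 1 write IOPS also depend on every write hitting both volumes):

```python
def raid0(volumes):
    """Striping: total size and total IOPS, but one disk failure loses everything."""
    return {"size_gb": sum(v["size_gb"] for v in volumes),
            "iops": sum(v["iops"] for v in volumes),
            "fault_tolerant": False}

def raid1(volumes):
    """Mirroring: usable size/IOPS of a single volume, survives a disk failure."""
    return {"size_gb": min(v["size_gb"] for v in volumes),
            "iops": min(v["iops"] for v in volumes),
            "fault_tolerant": True}

vols = [{"size_gb": 500, "iops": 3000}, {"size_gb": 500, "iops": 3000}]
# raid0(vols) -> 1000 GB / 6000 IOPS, raid1(vols) -> 500 GB / 3000 IOPS
```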

 

 

[ EFS - Elastic File System ]

Unlike EBS, usable across multiple AZs

High performance, high cost

Works with Linux AMIs (not Windows)

- Managed NFS (network file system) that can be mounted on many EC2

- EFS works with EC2 instances in multi-AZ

- Highly available, scalable, expensive (about 3x gp2), pay per use

- Use cases: content management, web serving, data sharing, Wordpress

- Uses NFSv4.1 protocol

- Uses security group to control access to EFS

- Compatible with Linux based AMI (not Windows)

- Encryption at rest using KMS

- POSIX file system(Linux) that has a standard file API

- File system scales automatically, pay-per-use, no capacity planning

 

# Performance & Storage Classes

EFS can periodically move infrequently accessed files to EFS-IA (Infrequent Access) to save cost

1) EFS Scale

  - 1000s of concurrent NFS clients, 10 GB+/s throughput

  - Grow to Petabyte-scale network file system, automatically

2) Performance mode (set at EFS creation time)

  - General purpose (default): latency-sensitive use cases (web server, CMS, etc..)

  - Max I/O - higher latency, throughput, highly parallel (big data, media processing)

3) Storage Tiers (lifecycle management feature - move file after N days) ***

  - Standard : for frequently accessed files

  - Infrequent access(EFS-IA) : cost to retrieve files, lower price to store

 

 

[ EBS (Elastic Block Store) vs EFS (Elastic File System) ]

Differences between EBS and EFS

1. EBS volumes

Can be mounted to one EC2 instance at a time

No multi-AZ

Can be migrated across AZs by creating a snapshot

 - can be attached to only one instance at a time

 - are locked at the AZ level

 - IO1 : can increase IO independently

 - GP2 : IO increases if the disk size increases

To migrate an EBS volume across AZ

 - Take a snapshot

 - Restore the snapshot to another AZ

 - EBS backups use IO and you shouldn't run them while your application is handling a lot of traffic

Root EBS volumes are deleted by default when the EC2 instance gets terminated. (you can disable that)

 

2. EFS

Can be mounted to hundreds of EC2 instances

Multi-AZ capable

Linux instances only

Higher performance/cost than EBS

- Mounting 100s of instances across AZ

- EFS share website files (WordPress)

- Only for Linux Instances (POSIX)

- EFS has a higher price point than EBS

- Can leverage EFS-IA for cost saving

 


[ 1. EBS : Elastic Block Store ]

An EC2 instance loses its root volume when it is terminated.

EBS is similar to a NAS, only the name differs

- An EC2 machine loses its root volume (main drive) when it is manually terminated.

- Unexpected terminations might happen from time to time (AWS would email you)

- Sometimes, you need a way to store your instance data somewhere

- An EBS Volume is a network drive you can attach to your instances while they run

- It allows your instances to persist data

 

[ EBS Volume ]

A network drive, not a physically attached disk

Can be attached/detached even while the server is running

Can be moved across AZs by creating a snapshot

- It's a network drive (not a physical drive)

  -- It uses the network to communicate with the instance, which means there might be a bit of latency

  -- It can be detached from an EC2 instance and attached to another one quickly

- It's locked to an AZ

  -- To move a volume across, you first need to snapshot it

- Have a provisioned capacity (size in GBs, and IOPS(I/O Ops Per Sec))

  -- You get billed for all the provisioned capacity

 

[ EBS Volume Types ]

There are four EBS volume types

- EBS Volumes are characterized in Size/Throughput/IOPS (I/O Ops Per Sec)

- Only GP2 and IO1 can be used as boot volumes

1) IO1 (SSD)

High-performance SSD volume

Highest-performance SSD volume for mission-critical low-latency or high-throughput workloads

- Critical business applications that require sustained IOPS performance, or more than 16000 IOPS per volume (GP2 limit)

- for Large database workloads (eg. MongoDB, Oracle, MySql)

  * GB range : 4GB ~ 16TB

  * MIN IOPS : 100

  * MAX IOPS : 64000 (for Nitro instances) or 32000 (other instances)

  * GB per IOPS : 50 IOPS per GB

 

2) GP2 (SSD)

General-purpose SSD volume

General Purpose SSD volume that balances price and performance for a wide variety of workloads

- Recommended for most workloads

- System boot volumes

- Virtual desktops

- Low-latency interactive apps

- Development and test environments

  * GB range : 1GB ~ 16TB (Small GP2 volumes can burst IOPS to 3000)

  * MAX IOPS : 16000 

  * GB per IOPS : 3 IOPS per GB (so at 5,334 GB you reach the max IOPS)
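The 3 IOPS/GB rule can be checked with a quick calculation (to my understanding of the gp2 formula: a 100 IOPS floor, 3 IOPS per GB, a 16,000 cap, and a credit-limited burst to 3,000 for smaller volumes):

```python
def gp2_baseline_iops(size_gb):
    """gp2 baseline: 3 IOPS per GB, with a floor of 100 and a cap of 16,000."""
    return max(100, min(16_000, 3 * size_gb))

def gp2_peak_iops(size_gb):
    """Smaller volumes can burst up to 3,000 IOPS (credit-limited)."""
    return max(gp2_baseline_iops(size_gb), 3_000)

# 3 * 5334 = 16,002, so a 5,334 GB volume sits at the 16,000 IOPS cap.
```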

 

3) ST1 (HDD)

Low-cost HDD volume

Low cost HDD volume designed for frequently accessed, throughput-intensive workloads

- Streaming workloads requiring consistent, fast throughput at a low price

- Big Data, Data warehouses, Log processing

- Apache Kafka

- Cannot be a boot volume

  * GB range : 500GB ~ 16TB

  * MIN IOPS : 500

  * MAX throughput : 500MB/s (can burst)

 

4) SC1 (HDD)

Lowest-cost HDD volume

Lowest cost HDD volume designed for less frequently accessed workloads

- Throughput-oriented storage for large volumes of data that is infrequently accessed

- Scenarios where the lowest storage cost is important

- Cannot be a boot volume

  * GB range : 500GB ~ 16TB

  * MIN IOPS : 250

  * MAX throughput : 250MB/s (can burst)

 

[ # Hands-On ]

1. How to mount

1) Create the EBS volume : EBS can be configured at Step 4 (Add Storage) when launching the EC2 instance

2) Check the mount status

> lsblk 

3) Check whether the drive has a file system

> sudo file -s /dev/{drivename}

4) Create a file system

> sudo mkfs -t ext4 /dev/{drivename}

5) Create a mount point

> sudo mkdir /data

6) Mount

> sudo mount /dev/xvdb /data

7) Verify the mount

> lsblk

8) Create a test file on the mounted path

> sudo touch /data/hello.txt

9) Edit fstab

> sudo nano /etc/fstab 

/dev/{drivename} /data ext4 defaults,nofail 0 2     (refer to the current mount info)

* fstab : the file system table. Mount info recorded here survives reboots, so the volume is mounted automatically at boot.

10) Check the file system

> sudo file -s /dev/{drivename}

 

2. unmount

> sudo umount /data

 

3. Mount after fstab is configured

> sudo mount -a


[ ASG : Auto Scaling Groups ]

Define min and max instance counts for the group; when load grows, add instances (scale out), and when it shrinks, remove them (scale in)

In the cloud you can create and get rid of servers very quickly

The goal of an ASG is to :

1) Scale out (add EC2 instances) to match an increased load

2) Scale in (remove EC2 instances) to match a decreased load

3) Ensure we have a minimum and a maximum number of machines running

4) Automatically Register new instances to a load balancer

 

 

# ASGs have the following attributes

- A launch configuration

  1) AMI + Instance Type

  2) EC2 User Data

  3) EBS Volumes

  4) Security Groups

  5) SSH Key Pair

- Min Size/Max Size/Initial Capacity

- Network + Subnets Information

- Load Balancer Information

 

# Auto Scaling Alarms

Control the ASG with CloudWatch alarms (an alarm acts as a trigger for the ASG)

- It is possible to scale an ASG based on CloudWatch alarms

- An Alarm monitors a metric (such as Average CPU)

- Metrics are computed for the overall ASG instances

- Based on the alarm :

   -- We can create scale-out policies (increase the number of instances)

   -- We can create scale-in policies (decrease the number of instances)

 

[ Auto Scaling New Rules ]

Scaling rules can be defined directly on EC2 metrics (CPU usage, etc.)

- It is now possible to define "better" auto scaling rules that are directly managed by EC2

  1) Target Average CPU Usage

  2) Number of requests on the ELB per instance

  3) Average Network In

  4) Average Network Out

- These rules are easier to set up and can make more sense

 

[ Auto Scaling Custom Metric ]

Scaling rules can be based on a user-defined metric.

We can auto scale based on a custom metric (eg. number of connected users)

1) Send custom metric from application on EC2 to CloudWatch

2) Create CloudWatch alarm to react to low/high values

3) Use the CloudWatch alarm as the scaling policy for ASG

 

 

# ASG Brain Dump

- Scaling policies can be on CPU, Network... and can even be on custom metrics or based on a schedule (if you know your visitors patterns)

- ASGs use Launch configurations or Launch Templates(newer)

- To update an ASG, you must provide a new launch configuration/launch template

- IAM roles attached to an ASG will get assigned to EC2 instances

- ASG are free. You pay for the underlying resources being launched

- Having instances under an ASG means that if they get terminated for whatever reason, the ASG will automatically create new ones as a replacement. (Extra safety)

- ASG can terminate instances marked as unhealthy by an LB (and hence replace them)

 

 

[ Scaling Policies of ASG ]

1. Target Tracking Scaling (set a target value)

  - Most simple and easy to set-up

  eg. I want the average ASG CPU to stay at around 40%

2. Simple/Step Scaling (set upper/lower thresholds)

  - When a CloudWatch alarm is triggered (example CPU > 70%), then add 2 units

  - When a CloudWatch alarm is triggered (example CPU < 30%), then remove 1 unit

3. Scheduled Actions (scale on a schedule)

   - Anticipate a scaling based on known usage patterns

   - eg. increase the min capacity to 10 at 5 PM on Fridays
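The Simple/Step Scaling example above maps directly to a small decision function (the thresholds and step sizes come from the example; the function name is illustrative):

```python
def scaling_adjustment(avg_cpu):
    """CPU > 70% -> add 2 instances; CPU < 30% -> remove 1; otherwise no change."""
    if avg_cpu > 70:
        return 2
    if avg_cpu < 30:
        return -1
    return 0
```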

 

# Simple Scaling vs Step Scaling :

Step Scaling is generally recommended

- Simple Scaling waits until the scaling activity completes and the cooldown period passes before responding to another alarm.

- Step Scaling responds to alarms even while a scaling activity is in progress.

 

[ Scaling Cooldowns *** ]

An option that delays further scale out/in so it doesn't happen before the previous ASG scaling activity takes effect.

If the cooldown is too long, instances that could be terminated right away stay alive during scale-in, incurring pointless instance charges; reducing the cooldown cuts that cost.

If several scale out/in events must happen within a short time, use a short cooldown.

- The cooldown period helps to ensure that your Auto Scaling group doesn't launch or terminate additional instances before the previous scaling activity takes effect.

- In addition to default cooldown for Auto Scaling group, we can create cooldowns that apply to a specific simple scaling policy

- A scaling-specific cooldown period overrides the default cooldown period.

- One common use for scaling-specific cooldowns is with a scale-in policy, i.e. a policy that terminates instances based on specific criteria or metrics. Because this policy terminates instances, Amazon EC2 Auto Scaling needs less time to determine whether to terminate additional instances.

- If the default cooldown period of 300 secs is too long - you can reduce costs by applying a scaling-specific cooldown period of 180 secs to the scale-in policy.

- If your application is scaling up and down multiple times each hour, modify the ASG cool-down timers and the CloudWatch Alarm Period that triggers the scale in 
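The cooldown behaviour can be sketched as a gate that ignores alarms until the previous activity's cooldown has elapsed (simplified; real ASGs also distinguish default vs. policy-specific cooldowns):

```python
from datetime import datetime, timedelta

class CooldownGate:
    """Suppress scaling actions until the cooldown after the last one elapses."""
    def __init__(self, cooldown_seconds=300):
        self.cooldown = timedelta(seconds=cooldown_seconds)
        self.last_scaled = None

    def try_scale(self, now):
        if self.last_scaled is not None and now - self.last_scaled < self.cooldown:
            return False  # still cooling down: the alarm is ignored
        self.last_scaled = now
        return True
```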

 

[ ASG for Solutions Architects *** ]

Instance termination policy when an ASG scales in: find the AZ with the most instances, then terminate the instance with the oldest launch configuration within it

1. ASG Default Termination Policy :

   1) Find the AZ which has the most number of instances

   2) If there are multiple instances in the AZ to choose from, delete the one with the oldest launch configuration

* ASG tries to balance the number of instances across AZs by default.
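The two-step default termination policy can be sketched as follows (the field names are mine; ties are broken arbitrarily here, whereas the real policy has further tie-breakers):

```python
from collections import Counter

def instance_to_terminate(instances):
    """1) Pick the AZ with the most instances,
       2) within it, pick the instance with the oldest launch configuration."""
    az_counts = Counter(i["az"] for i in instances)
    busiest_az = max(az_counts, key=az_counts.get)
    in_az = [i for i in instances if i["az"] == busiest_az]
    return min(in_az, key=lambda i: i["launch_config_created"])

fleet = [
    {"id": "i-a", "az": "us-east-1a", "launch_config_created": 100},
    {"id": "i-b", "az": "us-east-1a", "launch_config_created": 200},
    {"id": "i-c", "az": "us-east-1b", "launch_config_created": 50},
]
# us-east-1a has the most instances; i-a has the older launch config there.
```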

 

2. Lifecycle Hooks

- You have the ability to perform extra steps before the instance goes in service (Pending state)

- You have the ability to perform some actions before the instance is terminated (terminating state)

  eg. logging 

 

3. Launch Template vs Launch Configuration

* Both :

ID of the AMI, the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances (tags, EC2 user-data..)

* Differences :

1) Launch Configuration (legacy) :

   - Must be re-created every time

2) Launch Template (newer) :

   - Can have multiple versions

   - Can create parameter subsets (partial configuration for re-use and inheritance)

   - Provision using both On-Demand and Spot instances (or a mix)

   - Can use T2 unlimited burst feature

   - Recommended by AWS going forward

 

 


[ Types of load balancer on AWS ]

- AWS has 3 kinds of managed Load Balancers

1) Classic Load Balancer(v1 - old generation) - 2009

  : HTTP, HTTPS, TCP

2) Application Load Balancer(v2 - new generation) - 2016

  : HTTP, HTTPS, WebSocket

3) Network Load Balancer(v2 - new generation) - 2017

  : TCP, TLS(secure TCP) & UDP

- Overall, it is recommended to use the newer/v2 generation load balancers as they provide more features

- You can setup internal(private) or external(public) ELBs

 

[ Load Balancer Security Groups ] 

1) Configure the Load Balancer Security Group

  Create a security group allowing HTTP port 80

2) Application (EC2 Instance) Security Group : allow traffic only from the Load Balancer

  HTTP port 80, with the Load Balancer's security group as the source

 

[ Load Balancer Good to know ]

- LBs can scale but not instantaneously - contact AWS for a "warm-up"

- Troubleshooting

  -- 4xx errors are client-induced errors

  -- 5xx errors are application-induced errors

  -- A load balancer 503 error means it is at capacity or has no registered target

  -- If the LB can't connect to your application, check your security groups

- Monitoring

  -- ELB access logs will log all access requests (so you can debug per request)

  -- CloudWatch Metrics will give you aggregate statistics (eg: connection counts)

 

# 1. ALB : Application Load Balancer (v2)

- Application Load Balancers are Layer 7 (HTTP)

- Load balancing to multiple HTTP applications across machines (target groups)

- Load balancing to multiple applications on the same machine (ex: containers)

- Support for HTTP/2 and WebSocket

- Support redirects (from HTTP to HTTPS for example)

- Routing tables to different target groups:

  1) Routing based on path in URL (example.com/users , example.com/posts)

  2) Routing based on hostname in URL (one.example.com & other.example.com)

  3) Routing based on Query String, Headers (example.com/users?id=123&order=false)

- ALB are a great fit for micro services & container-based application (eg. Docker & Amazon ECS)

- Has a port mapping feature to redirect to a dynamic port in ECS

 

# Target Groups

- EC2 instances (can be managed by an Auto Scaling Group) - HTTP

- ECS tasks (managed by ECS itself) - HTTP

- Lambda functions - HTTP request is translated into a JSON event

- IP Address - must be private IPs

 

- ALB can route to multiple target groups

- Health checks are at the target group level

 

# Good to know

- Fixed hostname

- The application servers don't see the IP for the client directly

  -- The true IP of the client is inserted in the header X-Forwarded-For

  -- We can also get Port (X-Forwarded-Port) and proto (X-Forwarded-Proto)
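Reading the client IP behind an ALB then looks like this in application code (a minimal sketch; real code should only trust the header when the request actually came through the LB):

```python
def client_ip(headers):
    """Each hop appends to X-Forwarded-For; the left-most entry is the
    original client, the later ones are intermediate proxies/LB nodes."""
    xff = headers.get("X-Forwarded-For")
    if not xff:
        return None
    return xff.split(",")[0].strip()
```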

 

 

# 2. NLB : Network Load Balancer (v2)

- Network Load Balancers (Layer 4) allow you to :

  1) Forward TCP & UDP traffic to your instances

  2) Handle millions of requests per second

  3) Lower latency, ~100 ms (vs ~400 ms for ALB)

- NLB has one static IP per AZ, and supports assigning Elastic IP  (helpful for whitelisting specific IP)

- NLB are used for extreme performance, TCP or UDP traffic

- Not included in the AWS free tier

- Only NLB provides Elastic IP (CLB/ALB doesn't provide)

 

# Hands-on : LB

1. Choose ALB / NLB / CLB (Classic Load Balancer)

2. Step 1. Configure Load Balancer 

  Choose listeners (LB protocol), eg. HTTP/HTTPS/UDP/TCP

3. Step 1. Choose AZs (can be multiple)

4. Step 2. Configure the Security Group

5. Step 3. Configure the Target Group : target type (Instance or IP), protocol, port, health checks (timeout sec, interval)

6. Step 4. Register Targets : add instances

 

 

[ Load Balancer Stickiness ]

Requests from the same client are handled by the same server for a set period of time

- It is possible to implement stickiness so that the same client is always redirected to the same instance behind a load balancer

- This works for Classic Load Balancers & Application Load Balancers

- The cookie used for stickiness has an expiration date you control

- Use case : make sure the user doesn't lose his session data

- Enabling stickiness may bring imbalance to the load over the backend EC2 instances

* same requests originating from the same client go to the same target

* Stickiness is configured on the Target Group. With a duration set, requests go to the same instance until the cookie expires.

 

[ Cross-Zone Load Balancing ]

With Cross Zone Load Balancing : each load balancer instance distributes evenly across all registered instances in all AZ

Without Cross Zone Load Balancing : each load balancer node distributes requests evenly across the registered instances in its Availability Zone only

[ Cross-Zone LB charge ]

1) CLB (Classic Load Balancer)

- Disabled by default

- No charges for inter AZ data if enabled

2) ALB (Application Load Balancer)

- Always on (can't be disabled)

- No charges for inter AZ data

3) Network Load Balancer

- Disabled by default

- You pay charges for inter AZ data if enabled

 

[ ELB (Elastic Load Balancers) - SSL Certificates ]

1) CLB 

- Support only one SSL certificate

- Must use multiple CLBs for multiple hostnames with multiple SSL certificates

2) ALB

- Supports multiple listeners with multiple SSL certificates

- Uses Server Name Indication (SNI) to make it work

3) NLB 

- Same as ALB

 

* SNI : Server Name Indication

- SNI solves the problem of loading multiple SSL certificates onto one web server (to serve multiple websites)

- It's a newer protocol, and requires the client to indicate the hostname of the target server in the initial SSL handshake

- The server will then find the correct certificate, or return the default one
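Conceptually, the server-side selection works like a lookup with a default (the hostnames and certificate labels below are made up for illustration):

```python
# Hypothetical certificate store: one listener holding several certificates.
CERTS = {
    "one.example.com": "cert-for-one",
    "other.example.com": "cert-for-other",
}
DEFAULT_CERT = "default-cert"

def select_certificate(sni_hostname):
    """Pick the certificate matching the hostname the client sent in the
    TLS handshake, or fall back to the default one."""
    return CERTS.get(sni_hostname, DEFAULT_CERT)
```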

 

 

[ ELB - Connection Draining ]

While an EC2 instance is shutting down or de-registering (DRAINING state), the ELB waits up to the configured Deregistration Delay for in-flight responses; new requests go to the other EC2 instances behind the ELB

- Feature naming :

  -- CLB : Connection Draining

  -- ALB/NLB (which has Target Group) : Deregistration Delay

- Time to complete "in-flight requests" while the instance is de-registering or unhealthy

- Stops sending new requests to the instance which is de-registering

- Between 1 to 3600 secs, default is 300 secs.

- Can be disabled (set value to 0)

- Set to a low value if your requests are short

 

 


[ Scalability & High Availability ]

Scalability and high availability

- Scalability means that an application/system can handle greater loads by adapting.

- There are two kinds of scalability :

1) Vertical Scalability

2) Horizontal Scalability (=elasticity)

- Scalability is linked to, but different from, High Availability

 

1. Vertical Scalability

Vertical scaling: increasing the size of the instance

- Vertical scalability means increasing the size of the instance

  eg. t2.micro -> t2.large

- Vertical scalability is very common for non-distributed systems, such as databases.

- RDS, ElastiCache are services that can scale vertically.

- There's usually a limit to how much you can vertically scale (hardware limit)

 

2. Horizontal Scalability

Horizontal scaling: increasing the number of instances/systems

- Horizontal Scalability means increasing the number of instances/systems for your application

- Horizontal scaling implies distributed systems.

- This is very common for web applications/modern applications

- It's easy to horizontally scale thanks to cloud offerings such as Amazon EC2

 

[ High Availability ]

- High Availability usually goes hand in hand with horizontal scaling

- High availability means running your application/system in at least 2 data centers (= AZ)

- The goal of high availability is to survive a data center loss

- The high availability can be passive (for RDS Multi AZ for example)

- The high availability can be active (for horizontal scaling)

 

[ High Availability & Scalability For EC2 ]

- Vertical Scaling : Increase instance size (=scale up/down)

  From : t2.nano

  To : u-12tb1.metal

- Horizontal Scaling : Increase number of instances (=scale out/in)

  1) Auto Scaling Group

  2) Load Balancer

- High Availability : Run instances for the same application across multi AZ

  1) Auto Scaling Group multi AZ

  2) Load Balancer multi AZ

 

 

# Why use a load balancer?

- Spread load across multiple downstream instances

- Expose a single point of access (DNS) to your application

- Seamlessly handle failures of downstream instances

- Do regular health checks to your instances

- Provide SSL termination (HTTPS) for your websites

- Enforce stickiness with cookies

- High availability across zones

- Separate public traffic from private traffic

 

# Why use an EC2 Load Balancer?

- An ELB (Elastic Load Balancer) is a managed load balancer

 1) AWS guarantees that it will be working

 2) AWS takes care of upgrades, maintenance, high availability

 3) AWS provides only a few configuration knobs

- It costs less to setup your own load balancer but it will be a lot more effort on your end.

 

 

[ Health Checks ]

- Health Checks are crucial for Load Balancers

- They enable the load balancer to know if instances it forwards traffic to are available to reply to requests

- The health check is done on a port and a route (/health is common)

- If the response is not 200(OK), then the instance is unhealthy
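A sketch of the check's logic (the consecutive-failure threshold is a configurable target-group setting; the value 2 here is just an example):

```python
class HealthChecker:
    """Track health-check responses; non-200 answers count as failures,
    and enough consecutive failures mark the target unhealthy."""
    def __init__(self, unhealthy_threshold=2):
        self.threshold = unhealthy_threshold
        self.failures = 0

    def record(self, status_code):
        self.failures = 0 if status_code == 200 else self.failures + 1
        return "unhealthy" if self.failures >= self.threshold else "healthy"
```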

 

[ Types of load balancer on AWS ]

- AWS has 3 kinds of managed Load Balancers

1) Classic Load Balancer(v1 - old generation) - 2009

  : HTTP, HTTPS, TCP

2) Application Load Balancer(v2 - new generation) - 2016

  : HTTP, HTTPS, WebSocket

3) Network Load Balancer(v2 - new generation) - 2017

  : TCP, TLS(secure TCP) & UDP

- Overall, it is recommended to use the newer/v2 generation load balancers as they provide more features

- You can setup internal(private) or external(public) ELBs

 

[ Load Balancer Security Groups ] 

1) Load Balancer Security Group 설정

  HTTP 80 port 시큐리티 그룹 생성

2) Application(EC2 Instance) Security Group : Allow traffic only from Load Balancer

  HTTP 80 port , Source로 Load Balancer 의 Security Group 설정

 

[ Load Balancer Good to know ]

- LBs can scale(확장) but not instantaneously(즉시) - contact AWS for a "warm-up"

- Troubleshooting

  -- 4xx errors are client induced(유발) erros

  -- 5xx errors are application induced erros

  -- Load Balancer ERRORs 503 means at capacity or no registered target

  -- If the LB can't connect to your application, check your security groups

- Monitoring

  -- ELB access logs will log all access requests(so you can debug per request)

  -- CloudWatch Metrics will give you aggregate(집계) statistics (eg: connections count)

 

# 1. ALB : Application Load Balancer (v2)

- Application load balancers is Layer7 (HTTP)

- Load balancing to multiple HTTP applications across machines (target groups)

- Load balancing to multiple applications on the same machine (ex: containers)

- Support for HTTP/2 and WebSocket

- Support redirects (from HTTP to HTTPS for example)

- Routing tables to different target groups:

  1) Routing based on path in URL (example.com/users , example.com/posts)

  2) Routing based on hostname in URL (one.example.com & other.example.com)

  3) Routing based on Query String, Headers (example.com/users?id=123&order=false)

- ALBs are a great fit for microservices & container-based applications (eg. Docker & Amazon ECS)

- Has a port mapping feature to redirect to a dynamic port in ECS
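
A toy rule evaluator shows how the three routing conditions above pick a target group; the target-group names here are made up for illustration, not real ALB identifiers:

```python
from urllib.parse import urlsplit, parse_qs

def pick_target_group(url: str) -> str:
    """Toy version of ALB rule evaluation: hostname, then path, then query string."""
    parts = urlsplit(url)
    # Hostname-based rule
    if parts.hostname == "one.example.com":
        return "tg-one"
    # Path-based rules
    if parts.path.startswith("/users"):
        # Query-string condition: ?order=false goes to a dedicated group
        qs = parse_qs(parts.query)
        if qs.get("order") == ["false"]:
            return "tg-users-unordered"
        return "tg-users"
    if parts.path.startswith("/posts"):
        return "tg-posts"
    # Default action when no rule matches
    return "tg-default"
```

With this sketch, `example.com/users?id=123&order=false` lands on the query-string rule, while `example.com/posts` matches the path rule.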

 

# Target Groups

- EC2 instances (can be managed by an Auto Scaling Group) - HTTP

- ECS tasks (managed by ECS itself) - HTTP

- Lambda functions - HTTP request is translated into a JSON event

- IP Address - must be private IPs

 

- ALB can route to multiple target groups

- Health checks are at the target group level

 

# Good to know

- Fixed hostname

- The application servers don't see the client's IP directly

  -- The true IP of the client is inserted in the header X-Forwarded-For

  -- We can also get the port (X-Forwarded-Port) and protocol (X-Forwarded-Proto)
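
On the application side, recovering the original client IP is a one-line parse of that header; a minimal sketch (assuming the left-most entry is the client, which holds when the LB appends to the header):

```python
def client_ip(headers: dict) -> str:
    """Left-most entry of X-Forwarded-For is the original client when the LB appends."""
    xff = headers.get("X-Forwarded-For", "")
    return xff.split(",")[0].strip() if xff else ""
```

So `X-Forwarded-For: 203.0.113.7, 10.0.1.5` yields `203.0.113.7` as the client, with the proxy hop after it.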

 

 

# 2. NLB : Network Load Balancer (v2)

- Network Load Balancers (Layer 4) allow you to :

  1) Forward TCP & UDP traffic to your instances

  2) Handle millions of requests per second

  3) Lower latency: ~100 ms (vs ~400 ms for ALB)

- NLB has one static IP per AZ, and supports assigning an Elastic IP (helpful for whitelisting a specific IP)

- NLBs are used for extreme performance, TCP or UDP traffic

- Not included in the AWS free tier

 

# Hands-on : LB

1. Choose ALB / NLB / CLB (Classic Load Balancer)

2. Step 1. Configure Load Balancer

  Choose Listeners (LB protocol), eg. HTTP/HTTPS/UDP/TCP

3. Step 1. Choose AZs (can be multiple)

4. Step 2. Configure Security Group

5. Step 3. Configure Target Group : Target type (Instance or IP), Protocol, Port, Health checks (timeout sec, interval)

6. Step 4. Register Targets : Add instances
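
The same console flow can be sketched with boto3 (the AWS SDK for Python); all IDs below are placeholders and the calls need valid AWS credentials, so treat this as an outline rather than a runnable script:

```python
import boto3  # AWS SDK for Python; requires configured credentials

elbv2 = boto3.client("elbv2")

# Step 3: target group with a /health health check (placeholder VPC ID)
tg = elbv2.create_target_group(
    Name="my-tg", Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0", HealthCheckPath="/health",
)["TargetGroups"][0]

# Steps 1-2: an ALB in two subnets (two AZs) with a security group
lb = elbv2.create_load_balancer(
    Name="my-alb", Type="application",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],
)["LoadBalancers"][0]

# Step 4: register an instance, then forward listener traffic to the group
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}],
)
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"], Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```
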

 

 

[ Load Balancer Stickiness ]

- It is possible to implement stickiness so that the same client is always redirected to the same instance behind a load balancer

- This works for Classic Load Balancers & Application Load Balancers

- The cookie used for stickiness has an expiration date you control

- Use case : make sure the user doesn't lose their session data

- Enabling stickiness may bring imbalance to the load over the backend EC2 instances

* Stickiness means requests originating from the same client go to the same target

* Stickiness is configured on the Target Group. When a duration is set, requests from the same client keep going to the same instance until the cookie expires.
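
A toy model makes the duration-based behavior concrete: the first request picks a target and plants a "cookie" with an expiry, and later requests reuse that target until the cookie runs out. This simulates the mechanism only; real ALBs handle the cookie (`AWSALB`) for you.

```python
import random
import time

class StickyRouter:
    """Toy model of duration-based stickiness on a target group."""
    def __init__(self, targets, duration_sec):
        self.targets = list(targets)
        self.duration = duration_sec
        self.cookies = {}  # client_id -> (target, expiry timestamp)

    def route(self, client_id, now=None):
        now = time.time() if now is None else now
        entry = self.cookies.get(client_id)
        if entry and entry[1] > now:
            return entry[0]  # cookie still valid: same instance every time
        # Cookie missing or expired: pick a target and set a fresh expiry
        target = random.choice(self.targets)
        self.cookies[client_id] = (target, now + self.duration)
        return target
```

Note how every client that sticks to one instance skews the load: this is the imbalance risk the bullet above mentions.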

 

[ Cross-Zone Load Balancing ]

With Cross Zone Load Balancing : each load balancer node distributes requests evenly across all registered instances in all AZs

Without Cross Zone Load Balancing : each load balancer node distributes requests evenly across the registered instances in its Availability Zone only
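
The difference is pure arithmetic, assuming one LB node per AZ and clients spread evenly across those nodes. A small helper computes each instance's share of total traffic in both modes:

```python
def traffic_share(instances_per_az, cross_zone):
    """Percent of total traffic per instance, one LB node per AZ,
    clients split evenly across the nodes."""
    azs = len(instances_per_az)
    total = sum(instances_per_az)
    shares = []
    for n in instances_per_az:
        if cross_zone:
            # Every node spreads over all registered instances
            shares.append([100.0 / total] * n)
        else:
            # Each node only serves its own AZ's instances
            shares.append([100.0 / azs / n] * n)
    return shares
```

With 2 instances in AZ-1 and 8 in AZ-2: without cross-zone, each AZ-1 instance gets 25% while each AZ-2 instance gets 6.25%; with cross-zone, every instance gets an even 10%.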

[ Cross-Zone LB charge ]

1) CLB (Classic Load Balancer)

- Disabled by default

- No charges for inter AZ data if enabled

2) ALB (Application Load Balancer)

- Always on (can't be disabled)

- No charges for inter AZ data

3) Network Load Balancer

- Disabled by default

- You pay charges for inter AZ data if enabled

 

 

 

 

 


[ EC2 ]

1. EC2 instances are billed by the second, t2.micro is free tier

2. On Linux/Mac we use SSH; on Windows we use PuTTY

3. SSH is on port 22, lock down the security group to your IP

4. Timeout issues => Security Group issues

5. Permission issues on the SSH key => run "chmod 0400" **

6. Security Groups can reference other Security Groups instead of IP ranges ***

7. Know the difference between Private, Public and Elastic IP

8. You can customize an EC2 instance at boot time using EC2 User Data

9. Know the 4 EC2 launch modes : On demand/Reserved/Spot instances/Dedicated Hosts

10. Know the basic instance type: R, C, M, I, G, T2/T3

11. You can create AMIs to pre-install software on your EC2 => faster boot

12. AMI can be copied across regions and accounts

13. EC2 instances can be started in placement groups: Cluster/Spread/Partition

 

 

 


[ 1. ENI : Elastic Network Interfaces ]

- Virtual Network Interface

- Logical component in a VPC (Virtual Private Cloud) that represents a virtual network card

- The ENI can have the following attributes :

  1) Primary private IPv4, one or more secondary IPv4

  2) One Elastic IP (IPv4) per private IPv4

  3) One Public IPv4

  4) One or more security groups

  5) A MAC address

- You can create ENIs independently and attach them on the fly (move them) across EC2 instances for failover

- Bound to a specific AZ

[ # Hands-on : creating and attaching/detaching an ENI ]

1) Create an EC2 instance

  * The default network interface created with an EC2 instance is eth0

2) Create an additional ENI to attach to that instance

NETWORK & SECURITY tab > Network Interfaces menu > Create Network Interface > set the Subnet to the same AZ as the instance from 1) > Create

3) Right-click the ENI created in 2) and choose Attach > select the target EC2 instance

4) Confirm that the instance now has an eth1 ENI in addition to eth0
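
The same steps have a boto3 equivalent; the subnet, security group, and instance IDs below are placeholders, and the calls need AWS credentials:

```python
import boto3  # requires configured AWS credentials; all IDs are placeholders

ec2 = boto3.client("ec2")

# 2) Create an ENI in the same subnet (hence the same AZ) as the instance
eni = ec2.create_network_interface(
    SubnetId="subnet-aaaa1111",
    Groups=["sg-0123456789abcdef0"],
    Description="secondary interface for failover",
)["NetworkInterface"]

# 3) Attach it as eth1 (DeviceIndex 1; eth0 is the default interface)
attachment = ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",
    DeviceIndex=1,
)

# Detach later (e.g. to move the ENI to a standby instance on failover)
ec2.detach_network_interface(AttachmentId=attachment["AttachmentId"])
```
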

 

 

[ 2. EC2 Hibernate ]

We can stop or terminate instances

 - Stop : the data on disk(EBS) is kept intact in the next start

 - Terminate : any EBS volume (root) set up to be destroyed on termination is lost

 

On start, the following happens :

 - First start : the OS boots & the EC2 User Data script is run

 - Following starts : the OS boots up

 - Then your application starts, caches get warmed up, and that can take time.

 

EC2 Hibernate : The in-memory(RAM) state is preserved

When the instance is stopped, the data in RAM is written to storage; on restart, that data is loaded back so the instance resumes exactly where it left off

 - The instance boot is much faster (the OS is not stopped/restarted)

 - Under the hood: the RAM state is written to a file in the root EBS volume

 - The root EBS volume must be encrypted

 - Use cases :

   1) long-running processing

   2) saving the RAM state

   3) services that take time to initialize

 

* Supported instance families - C3~5, M3~5, R3~5

* Instance RAM size - must be less than 150 GB

* Root Volume : must be EBS (not instance store), encrypted, and large enough to hold the RAM state

* Available for On-Demand and Reserved Instances

* An instance cannot be hibernated for more than 60 days

 

[ # Hands-on : enabling Hibernate ]

When creating an EC2 instance : at the bottom of Step 3. Configure Instance Details > Stop - Hibernate behavior : check "Enable hibernation as an additional stop behavior" > Step 4. Add Storage > enable EBS encryption > Launch

wisen.co.kr/pages/blog/blog-detail.html?idx=9920
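
A boto3 sketch of the same setup (the AMI/instance IDs are placeholders and the calls need AWS credentials): hibernation is opted into at launch, and the root volume must be encrypted.

```python
import boto3  # requires configured AWS credentials; IDs are placeholders

ec2 = boto3.client("ec2")

# Launch with hibernation enabled on an encrypted root EBS volume
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0", InstanceType="m5.large",
    MinCount=1, MaxCount=1,
    HibernationOptions={"Configured": True},
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 30, "Encrypted": True},  # encryption is mandatory
    }],
)
instance_id = resp["Instances"][0]["InstanceId"]

# Later, hibernate instead of a plain stop: RAM is saved to the root volume
ec2.stop_instances(InstanceIds=[instance_id], Hibernate=True)
```
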

 

 

 


[ Placement Groups ]

Three EC2 instance placement strategies (Cluster, Spread, Partition)

Sometimes you want control over the EC2 Instance placement strategy

That strategy can be defined using placement groups

 

When you create a placement group, you specify one of the following strategies for the group:

1) Cluster : clusters instances into a low-latency group in a single AZ

Same AZ, same rack : low latency and excellent network performance

- Pros : Great network (10 Gbps bandwidth between instances)

- Cons : If the rack fails, all instances fail at the same time

- Use case :

  Big Data job that needs to complete fast

  Application that needs extremely low latency and high network throughput

 

2) Spread : spreads instances across underlying hardware (max 7 instances per group per AZ), eg. critical applications

Spreads instances across multiple AZs, lowering the chance that all instances fail at once; limited to 7 instances per AZ

- Pros :

  Can span across AZs

  Reduced risk of simultaneous failure

  EC2 Instances are on different physical hardware

- Cons :

  Limited to 7 instances per AZ per placement group

- Use case :

  Application that needs to maximize high availability

  Critical applications where each instance must be isolated from the others' failures

 

3) Partition : spreads instances across many different partitions (which rely on different sets of racks) within an AZ. Scales to 100s of EC2 instances per group (Hadoop, Cassandra, Kafka)

Spreads instances across multiple partitions; limited to 7 partitions per AZ, scaling to hundreds of instances per group

- Up to 7 partitions per AZ

- Up to 100s of EC2 instances

- The instances in a partition do not share racks with the instances in the other partitions

- A partition failure can affect many EC2 but won't affect other partitions

- EC2 instances get access to the partition information as metadata

- Use cases :

  HDFS, HBase, Kafka, Cassandra

 

[ # Placement Group setup ]

1) Create a Placement Group :

  Create Placement Group > choose a Strategy (Cluster/Spread/Partition) > Create

2) Assign the Placement Group :

  Launch Instance > choose AMI > choose Instance Type > select the Placement group in the Configure Instance step

  * For Partition, you can use auto distribution or pin a specific partition

  * A Cluster placement group cannot be selected with free-tier instance types
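
The boto3 equivalent of the setup above (group names and IDs are placeholders; the calls need AWS credentials):

```python
import boto3  # requires configured AWS credentials; names/IDs are placeholders

ec2 = boto3.client("ec2")

# 1) One group per strategy
ec2.create_placement_group(GroupName="pg-cluster", Strategy="cluster")
ec2.create_placement_group(GroupName="pg-spread", Strategy="spread")
ec2.create_placement_group(GroupName="pg-part", Strategy="partition",
                           PartitionCount=3)

# 2) Launch an instance into a group (cluster groups reject free-tier types)
ec2.run_instances(
    ImageId="ami-0123456789abcdef0", InstanceType="c5.large",
    MinCount=1, MaxCount=1,
    Placement={"GroupName": "pg-cluster"},
)
```
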


[ AMI ]

A customized instance image

an image to use to create our instances

- As we saw, AWS comes with base images such as : 

  Ubuntu, RedHat, Windows, ... etc

  These images can be customized at boot time using EC2 User Data

- AMIs can be built for Linux or Windows machines

 

# Why would you use a custom AMI?

- Using a custom built AMI can provide the following advantages:

- Pre-installed packages needed

- Faster boot time (no need for EC2 user data at boot time)

- Machine comes configured with monitoring/enterprise software

- Security concerns - control over the machines in the network

- Control of maintenance and update of AMIs over time

- Active Directory Integration out of the box

- Installing your app ahead of time (for faster deploys when auto-scaling)

- Using someone else's AMI that is optimised for running an app, DB, etc..

 

# Using Public AMIs

- You can leverage AMIs from other people

- You can also pay for other people's AMI by the hour

  -- These people have optimised the software

  -- The machine is easy to run and configure

  -- You basically rent "expertise" from the AMI creator

- AMIs can be found and published on the Amazon Marketplace

* Warning : Do not use an AMI you don't trust. Some AMIs might come with malware or may not be secure

 

[ # AMI Storage ] 

- Your AMIs take up space, and they live in Amazon S3

- Amazon S3 is a durable, cheap and resilient storage where most of your backups will live (but you won't see them in the S3 console)

- By default, your AMIs are private, and locked for your account/region

- An AMI created for a region can only be seen in that region

- AMI is region locked and the same ID cannot be used across regions

- You can also make your AMIs public and share them with other AWS accounts or sell them on the AMI Marketplace

 

[ # AMI pricing ]

- AMIs live in Amazon S3, so you get charged for the actual space it takes in Amazon S3

- Amazon S3 pricing in us-east-1 :

  First 50 TB/month : $0.023 per GB

  Next 450 TB/month : $0.022 per GB

- Overall it is quite inexpensive to store private AMIs

- Make sure to remove the AMIs you don't use
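
A quick cost calculation using the two tiers quoted above (ignoring tiers beyond 500 TB, which private AMIs won't reach):

```python
def monthly_ami_storage_cost(size_gb: float) -> float:
    """S3 cost from the us-east-1 tiers above:
    $0.023/GB for the first 50 TB, $0.022/GB for the next 450 TB."""
    tier1 = min(size_gb, 50_000)                       # 50 TB = 50,000 GB
    tier2 = min(max(size_gb - 50_000, 0), 450_000)     # next 450 TB
    return tier1 * 0.023 + tier2 * 0.022
```

An 8 GB AMI costs about $0.18 per month, which is why storing private AMIs is considered quite inexpensive.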

 

[ Cross Account AMI Copy ]

AMIs can be shared.

To copy another account's AMI, the owner of the AMI must grant you permission first.

- You can share an AMI with another AWS account.

- Sharing an AMI doesn't affect the ownership of the AMI

- If you copy an AMI that has been shared with your account, you are the owner of the target AMI in your account

- To copy an AMI that was shared with you from another account, the owner of the source AMI must grant you read permissions for the storage that backs the AMI, either the associated EBS snapshot (for an Amazon EBS-backed AMI) or an associated S3 bucket (for an instance store-backed AMI).

 

* Limits

billingProduct AMIs such as Windows AMIs cannot be copied from another account.

Instead, launch an EC2 instance from the billingProduct AMI, then create a new AMI from that instance - that new AMI can be copied.

  1) You can't copy an encrypted AMI that was shared with you from another account. Instead, if the underlying snapshot and encryption key were shared with you, you can copy the snapshot while re-encrypting it with a key of your own. You own the copied snapshot, and can register it as a new AMI.

  2) You can't copy an AMI with an associated billingProduct code that was shared with you from another account. This includes Windows AMIs and AMIs from the AWS Marketplace. To copy a shared AMI with a billingProduct code, launch an EC2 instance in your account using the shared AMI and then create an AMI from the instance.

 

# Sharing my AMI with another user (AWS account)

Right-click the AMI > Modify Image Permissions > enter the target AWS account number and click Add Permission

* Add "create volume" permissions to the following associated snapshots when creating permissions

  - Checking this option allows the target account to copy the AMI directly; leaving it unchecked disallows direct copying.

  - Even without it, the AMI can be copied the same way as a billingProduct AMI : launch an EC2 instance from the shared AMI, then create a new AMI from that instance
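
The console steps map to two boto3 calls (account numbers and IDs below are placeholders; the calls need AWS credentials): one grants launch permission on the AMI, the other shares the backing snapshot so direct copy works.

```python
import boto3  # requires configured AWS credentials; IDs are placeholders

ec2 = boto3.client("ec2")

# Grant launch permission on the AMI to another account
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Add": [{"UserId": "123456789012"}]},
)

# Optionally allow direct copy by also sharing the backing EBS snapshot
# (this is the "create volume" permission from the console checkbox)
ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["123456789012"],
)
```
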

 

 
