[ Amazon S3 - Buckets ]

- Amazon S3 allows people to store objects (files) in "buckets" (directories)

- Buckets must have a globally unique name

- Buckets are defined at the region level

- Naming convention

 1) No uppercase

 2) No underscore

 3) 3-63 characters long

 4) Not an IP

 5) Must start with lowercase letter or number 

* The bucket name must be globally unique (across all AWS accounts)

* S3 looks like a global service in the console, but buckets are regional resources

 

[ Amazon S3 - Objects ]

- Objects (files) have a key

- The key is the FULL path :

  s3://my-bucket/my_folder1/my_file.txt

- The key is composed of prefix (my_folder1/) + object name (my_file.txt)

- There is no real concept of directories within buckets (the console just renders key prefixes as folders)

- Object values are the content of the body :

   Max Object Size is 5TB

   If uploading more than 5GB, must use "multi-part upload"

- Metadata (list of text key/value pairs - system or user metadata)

- Tags (Unicode key/value pair - up to 10) - useful for security/lifecycle

- Version ID (if versioning is enabled)
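
A minimal AWS CLI sketch of the above (bucket name, key and tag are placeholders) - uploading an object whose key is prefix + object name, then reading back its metadata :

> aws s3api put-object --bucket my-bucket --key my_folder1/my_file.txt --body ./my_file.txt --tagging "project=demo"
> aws s3api head-object --bucket my-bucket --key my_folder1/my_file.txt     # shows metadata, size and the version ID (if versioning is enabled)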

 

[ Amazon S3 - Versioning ]

- You can version your files in Amazon S3

- It is enabled at the bucket level

- Same key overwrite will increment the version : 1,2,3..

- It is best practice to version your buckets

  Protect against unintended deletes

  Easy roll back to previous version

- Any file that is not versioned prior to enabling versioning will have version "null"

- Suspending versioning does not delete the previous versions
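
A quick AWS CLI sketch (bucket and key names are placeholders) - enabling versioning and listing the versions of an overwritten object :

> aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled
> aws s3 cp ./my_file.txt s3://my-bucket/my_file.txt     # upload the same key twice to create two versions
> aws s3 cp ./my_file.txt s3://my-bucket/my_file.txt
> aws s3api list-object-versions --bucket my-bucket --prefix my_file.txt     # objects uploaded before versioning was enabled show version "null"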

 

[ S3 Encryption for Objects ]

There are 4 methods of encrypting objects in S3

1) SSE-S3 : encrypts S3 objects using keys handled & managed by AWS

  - Object is encrypted server side

  - AES-256 encryption type

  - Must set header : "x-amz-server-side-encryption":"AES256"

2) SSE-KMS : leverage AWS Key Management Service to manage encryption keys

  - encryption using keys handled & managed by KMS

  - KMS Advantages : user control + audit trail

  - Object is encrypted server side

  - Must set header : "x-amz-server-side-encryption":"aws:kms"

3) SSE-C : when you want to manage your own encryption keys

  - server-side encryption using keys fully managed by the customer, outside of AWS

  - Amazon S3 does not store the encryption key you provide

  - HTTPS must be used

  - Encryption keys must be provided in HTTP headers, for every HTTP request made

4) Client Side Encryption

  - Client library such as the Amazon S3 Encryption Client

  - Clients must encrypt data themselves before sending to S3

  - Clients must decrypt data themselves when retrieving from S3

  - Customer fully manages the keys and encryption cycle
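
A hedged AWS CLI sketch of SSE-S3 and SSE-KMS (bucket, key and KMS key ID are placeholders) - the --sse option is what sets the "x-amz-server-side-encryption" header shown above :

> aws s3 cp ./file.txt s3://my-bucket/file.txt --sse AES256                                    # SSE-S3
> aws s3 cp ./file.txt s3://my-bucket/file.txt --sse aws:kms --sse-kms-key-id <kms-key-id>     # SSE-KMS
> aws s3api head-object --bucket my-bucket --key file.txt                                      # the ServerSideEncryption field shows which one was applied

For SSE-C, the key itself would be passed with --sse-c / --sse-c-key, and the request must go over HTTPS.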

 

# Encryption in transit (SSL/TLS)

- Amazon S3 exposes :

  HTTP endpoint : non encrypted

  HTTPS endpoint : encryption in flight

- You are free to use the endpoint you want, but HTTPS is recommended

- Most clients would use the HTTPS endpoint by default

* HTTPS is mandatory for SSE-C

 

 

 


[ Instantiating Applications Quickly ]

EC2 Instance :

- Use a Golden AMI : Install your applications, OS dependencies etc. beforehand and launch your EC2 instance from the Golden AMI

- Bootstrap using User Data : For dynamic configuration, use User Data scripts

- Hybrid : mix Golden AMI and User Data (Elastic Beanstalk)

RDS Databases :

- Restore from a snapshot : the database will have schemas and data ready

EBS Volumes :

- Restore from a snapshot : the disk will already be formatted and have data

 

 

# Developer problems on AWS

- Managing infrastructure

- Deploying Code

- Configuring all the databases, load balancers, etc

- Scaling concerns

 

- Most web apps have the same architecture (ALB + ASG)

- All the developers want is for their code to run

- Possibly consistently across different applications and environments

 

 

 

[ AWS Elastic Beanstalk ]

- Elastic Beanstalk is a developer-centric view of deploying an application on AWS

- It uses all the components we've seen before : EC2, ASG, ELB, RDS, etc...

- But it's all in one view that's easy to make sense of

- We still have full control over the configuration

- Beanstalk itself is free, but you pay for the underlying instances

- Managed service

  -- Instance configuration/OS is handled by Beanstalk

  -- Deployment strategy is configurable but performed by Elastic Beanstalk

- Just the application code is the responsibility of the developer

- Three architecture models :

  1) Single Instance deployment : good for dev

  2) LB+ASG : great for production or pre-production web applications

  3) ASG only : great for non-web apps in production (workers, etc..)

- Elastic Beanstalk has three components

  1) Application

  2) Application version : each deployment gets assigned a version

  3) Environment name (dev/test/prod..) : free naming

- You deploy application versions to environments and can promote application versions to the next environment

- Rollback feature to previous application version

- Full control over lifecycle of environments

Support for many platforms : Go, Java, Java with Tomcat, Node.js, PHP, Python, Ruby, single/multi-container Docker, preconfigured Docker... (if a platform is not supported, you can write your custom platform)
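
A hedged sketch of the version/environment workflow described above, using the AWS CLI (application, environment, bucket and version names are placeholders) :

> aws elasticbeanstalk create-application-version --application-name my-app --version-label v2 --source-bundle S3Bucket=my-deploy-bucket,S3Key=my-app-v2.zip
> aws elasticbeanstalk update-environment --environment-name my-app-dev --version-label v2      # deploy v2 to the dev environment
> aws elasticbeanstalk update-environment --environment-name my-app-prod --version-label v2     # later, promote the same version to prod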

 


 

 

 

SELECT A.STATUS        -- session status

      ,A.SID           -- SID

      ,A.SERIAL#       -- serial number

      ,A.USERNAME      -- user

      ,A.OSUSER        -- OS user

      ,B.SQL_TEXT      -- query text

  FROM V$SESSION A

      ,V$SQLAREA B

 WHERE A.SQL_HASH_VALUE = B.HASH_VALUE 

   AND A.SQL_ADDRESS = B.ADDRESS 

   AND A.STATUS = 'ACTIVE'



Reference : taking.co.kr/96


[ Route53 ]

- Route53 is a Managed DNS (Domain Name System)

- DNS is a collection of rules and records which helps clients understand how to reach a server through its domain name

* You pay $0.50 per month per hosted zone

- In AWS, the most common records are :

  1) A : host name to IPv4

  2) AAAA : hostname to IPv6

  3) CNAME : hostname to hostname

  4) Alias : hostname to AWS resource

- Route 53 can use :

  public domain names you own (or buy)

  private domain names that can be resolved by your instances in your VPCs.

 

- Route 53 has advanced features such as :

  Load balancing (through DNS - also called client load balancing)

  Health checks (although limited..)

  Routing policy : simple, failover, geolocation, latency, weighted, multi value 

 

[ DNS Records TTL (Time to Live) ]

The TTL is how long the DNS answer stays cached on the client (e.g. the web browser).

The web browser sends a DNS query to Route 53 and receives the IP for the domain together with a TTL; it caches the answer for the TTL duration, and once the TTL expires it queries DNS again and re-caches the IP.

The longer the TTL, the less DNS traffic, but the higher the chance the browser keeps using an outdated IP (e.g. after you change the A record).

A TTL value is mandatory for every DNS record.

- High TTL (eg. 24 hour)

  Less traffic on DNS, Possibly outdated records

- Low TTL (eg. 60 seconds)

  More traffic on DNS, Records are outdated for less time, Easy to change records

* TTL is mandatory for each DNS record
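
A hedged sketch of creating an A record with a TTL via the AWS CLI (hosted zone ID, domain and IP are placeholders) :

> aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC --change-batch file://record.json

where record.json contains :

{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.mydomain.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "11.22.33.44" }]
    }
  }]
}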

 

[ CNAME vs Alias ]

CNAME : resolving the domain (host) redirects to another hostname; cannot be used for the root domain; not free.

Alias : resolving the domain (host) redirects to an AWS resource; can also be used for the root domain; free.

CNAME :

- Points a hostname to any other hostname (app.mydomain.com > blabla.anything.com)

- only for Non Root domain (eg. something.mydomain.com)

- not free

Alias :

- Points a hostname to an AWS Resource (app.mydomain.com > blabla.amazonaws.com)

- Works for Root domain and non root domain (eg. mydomain.com)

- Free of charge

- Native health check
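
A hedged sketch of an Alias record pointing the root domain at an ALB (the DNSName and target HostedZoneId are placeholders - for an Alias, the HostedZoneId is the hosted zone of the target AWS resource, not your own zone, and no TTL is set) :

> aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC --change-batch file://alias.json

where alias.json contains :

{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "mydomain.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z00000000EXAMPLE",
        "DNSName": "my-alb-123456.ap-northeast-2.elb.amazonaws.com",
        "EvaluateTargetHealth": true
      }
    }
  }]
}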

 

[ Simple Routing Policy ]

One record name points to a single resource (1:1); health checks cannot be attached.

If the record holds two or more values (IPs), the client picks one of the returned IPs at random.

- Use when you need to redirect to a single resource

- You can't attach health checks to simple routing policy

* If multiple values are returned, a random one is chosen by the client

 (=client side load balancing)

 

[ Weighted Routing Policy ]

A policy that spreads traffic by giving each record a different weight.

- Control the % of the requests that go to specific endpoint

- Helpful to test 1% of traffic on new app version for example

- Helpful to split traffic between two regions

- Can be associated with Health Checks

 

[ Latency Routing Policy ]

A policy that routes to the record with the lowest response time.

(eg. with instances in the Korea/US/UK regions registered under one latency-routed name, a DNS query from Seoul is answered with the Korean instance's record)

- Redirect to the server that has the least latency close to us

- Super helpful when latency of users is a priority

- Latency is evaluated in terms of user to designated AWS Region (each user is routed to the host with the lowest latency for them)

- Germany may be directed to the US (if that's the lowest latency)

 

[ Health Checks ]

Route 53 probes the instance (IP) at the configured check interval and uses consecutive results to decide its health.

- After 3 failed health checks (default threshold is 3) => unhealthy

- After 3 passed health checks (default threshold is 3) => healthy

- Default Health Check Interval : 30s (can set to 10s - higher cost)

- About 15 health checkers will check the endpoint health

   => one request every 2 seconds on average

- Can have HTTP, TCP and HTTPS health checks (no SSL verification)

- Possibility of integrating the health check with CloudWatch 

* Health checks can be linked to Route53 DNS queries
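
A hedged sketch of creating an HTTP health check with the AWS CLI (IP, path and caller reference are placeholders) - the returned health check ID can then be set as the HealthCheckId of a record set :

> aws route53 create-health-check --caller-reference my-hc-2021-03-24 --health-check-config "IPAddress=11.22.33.44,Port=80,Type=HTTP,ResourcePath=/health,RequestInterval=30,FailureThreshold=3"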

 

[ Failover Routing Policy ]

1. The web browser sends a DNS query to Route 53

2. Route 53 health-checks the primary instance

3. If the primary instance is unhealthy, the query is answered with the secondary instance (DR : disaster recovery)

 

[ Geolocation Routing Policy ]

Requests coming from a configured location are handled by the instance behind the record for that location.

Requests from locations that were not configured are handled by the instance behind the default record.

- Different from Latency based

- This is routing based on user location

- Here we specify : traffic from the UK should go to this specific IP

* Should create a "default" policy (in case there's no match on location)

 

[ Multi Value Routing Policy (=client side load balancing) ]

Up to 8 records can be associated with the same DNS name.

When the client queries Route 53, only healthy records are returned.

The client then picks one of the healthy records at random.

- Use when routing traffic to multiple resources

- Want to associate a Route 53 health checks with records

- Up to 8 healthy records are returned for each Multi Value query

* Multi Value is not a substitute for having an ELB

 

[ # Hands-on : configuring records and health checks in Route 53 ]

1. Create a health check (enter the instance IP or domain)

2. Create a record in Route 53

- Name : sample.testaws.com (sample becomes the record set's name and the domain)

- Type : A record (IPv4)

- TTL : how long the returned IP stays cached

- Value : the value for the chosen type; for an A record, the instance's IPv4 address

- Routing Policy : choose simple (single record), failover, geolocation, latency, weighted, multi value, ...

3. Depending on the chosen routing policy, set the "Associate with Health Check" option to Yes and select the health check

: With this setup the client sends DNS queries to Route 53; the health check periodically pings the IP to determine whether the instance is healthy/unhealthy, and the answer then depends on the instance state and the selected routing policy.

 

 

[ Route 53 as a Registrar ]

- A domain name registrar is an organization that manages the reservation of Internet domain names

(eg. Google Domains, and also Route53(AWS))

* Domain Registrar != DNS (but each domain registrar usually comes with some DNS features)

 

# 3rd Party Registrar with AWS Route 53

Using Route 53's DNS servers for a domain bought from a 3rd party :

1) Configure the domain to use custom name servers instead of the name servers provided by the 3rd party (eg. Google)

2) Set those custom name servers to the name servers of the hosted zone created in Route 53 (after creating the hosted zone, the name servers are listed in its details)

- If you buy your domain on 3rd party website, you can still use Route53

1) Create a Hosted Zone in Route53

2) Update NS Records on 3rd party website to use Route53 name servers

 

 

 

 


[ AWS ElastiCache ]

- In the same way RDS gives you managed relational databases,

- ElastiCache gives you managed Redis or Memcached

- Caches are in-memory databases with really high performance, low latency

- Helps reduce load off of databases for read intensive workloads

- Helps make your application stateless

- Write Scaling using sharding

- Read Scaling using Read Replicas

- Multi AZ with Failover Capability

- AWS takes care of OS maintenance/patching, optimizations, setup, configuration, monitoring, failure recovery and backups

 

[ ElastiCache Solution Architecture - DB Cache ]

The app queries ElastiCache first; on a cache miss it SELECTs from RDS and writes the result to the cache.

The next time the same data is read, it is already in the cache (hit).

 

[ ElastiCache Solution Architecture - User Session Store ]

After logging in to the app, the session data is stored in ElastiCache.

When the user hits a different instance, that instance fetches the session from ElastiCache and keeps the user logged in.

No need to re-authenticate every time.

 

[ Redis vs Memcached ]

* Redis (similar to RDS)

 - Multi AZ with Auto-Failover

 - Read Replicas to scale reads and have high availability

 - Data Durability using AOF persistence

 - Backup and restore features

 

* Memcached

 - Multi-node for partitioning of data (sharding)

 - Non persistent

 - No backup and restore

 - Multi-threaded architecture

 

[ ElastiCache - Cache Security ]

1. All caches in ElastiCache :

  - Support SSL in flight encryption

  - Do not support IAM authentication *** 

  - IAM policies on ElastiCache are only used for AWS API-level security

2. Redis AUTH

  - You can set a pw/token when you create a Redis cluster

  - This is an extra level of security for your cache (on top of security groups)

3. Memcached

  - Supports SASL-based authentication (advanced)
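
A hedged sketch of connecting to a Redis cluster that has in-flight encryption and Redis AUTH enabled (endpoint and token are placeholders; requires a redis-cli built with TLS support) :

> redis-cli -h my-redis.xxxxxx.apn2.cache.amazonaws.com -p 6379 --tls -a 'my-auth-token' PING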

 

[ # ElastiCache for Solutions Architects ] 

With Lazy Loading, data in the cache can become stale, because it is only written to the cache at read time.

With Write Through, the cache is also added to / updated whenever data is written to the DB, so there is no stale data.

Patterns for ElastiCache

- Lazy Loading : all the read data is cached, data can become stale in cache

- Write Through : Adds or update data in the cache when written to a DB (no stale data)

- Session Store : store temporary session data in a cache (using TTL features)

 

 

 


[ Aurora ]

- Aurora is a proprietary technology from AWS (not open sourced)

- Postgres and MySQL are both supported as Aurora DB (that means your drivers will work as if Aurora was a Postgres or MySQL database)

- Aurora is "AWS cloud optimized" and claims a 5x performance improvement over MySQL on RDS, and over 3x the performance of Postgres on RDS

- Aurora storage automatically grows in increments of 10GB, up to 64TB

- Aurora can have 15 replicas while MySQL has 5, and the replication process is faster (sub 10ms replica lag)

- Failover in Aurora is instantaneous. It's High Availability native

- Aurora costs more than RDS (about 20% more) - but is more efficient

 

# Aurora High Availability and Read Scaling

- 6 copies of your data across 3 AZ :

  -- 4 copies out of 6 needed for writes

  -- 3 copies out of 6 needed for reads

  -- Self healing with peer-to-peer replication

  -- Storage is striped across 100s of volumes

- One Aurora Instance takes writes (master)

- Automated failover for master in less than 30 seconds

- Master + up to 15 Aurora Read Replicas serve reads

- Support for Cross Region Replication

 

 

[ Aurora DB Cluster ]

Writes (through the master) and reads (through the read replicas) each go through their own endpoint (writer endpoint / reader endpoint).
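
A hedged AWS CLI sketch for looking up the two endpoints of a cluster (the cluster identifier is a placeholder) - the writer endpoint always points at the current master, while the reader endpoint load-balances connections across the read replicas :

> aws rds describe-db-clusters --db-cluster-identifier my-aurora-cluster --query "DBClusters[0].[Endpoint,ReaderEndpoint]"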

[ Aurora Security ]

- Similar to RDS because uses the same engines

- Encryption at rest using KMS

- Automated backups, snapshots and replicas are also encrypted

- Encryption in flight using SSL (same process as MySQL or Postgres)

- Possibility to authenticate using IAM token (same method as RDS)

- You are responsible for protecting the instance with security groups

- You can't SSH

 

[ Aurora Serverless ]

When the load increases, additional Aurora capacity is provisioned automatically; when it decreases, it is scaled back down.

- Automated database instantiation and auto-scaling based on actual usage

- Good for infrequent, intermittent(간헐적인) or unpredictable workloads

- No capacity planning needed

- Pay per second, can be more cost-effective

 

[ Global Aurora ]

1 primary (master) region, up to 5 secondary regions, and up to 16 read replicas per secondary region

- Aurora Cross Region Read Replicas :

  Useful for disaster recovery

  Simple to put in place

- Aurora Global Database (recommended) :

  1 Primary Region (read/write)

  Up to 5 secondary (read-only) regions, replication lag is less than 1 second

  Up to 16 Read Replicas per secondary region

  Helps for decreasing latency

  Promoting another region (for disaster recovery) has an RTO (recovery time objective) of < 1 minute

 

 


 

 

Windows NAS mount :

redmilk.co.kr/archives/2483

 

macOS NAS mount :

lightinglife.tistory.com/205

blog.naver.com/PostView.nhn?blogId=neces2&logNo=220881460998&categoryNo=11&parentCategoryNo=0&viewDate=&currentPage=1&postListTopCurrentPage=1&from=search

 


[ RDS : Relational Database Service ]

A managed SQL database service on AWS.

- It's a managed DB service for databases that use SQL as a query language.

- It allows you to create databases in the cloud that are managed by AWS

  MySQL, MariaDB, Aurora(AWS), Oracle...

 

# Advantages of using RDS versus deploying a DB on EC2

Benefits of using RDS instead of running the DB directly on EC2 :

replica setup for failover, read replicas for better read performance, backups and point-in-time restore, etc.

RDS is a managed service

- Automated provisioning (resources allocated on demand), OS patching

- Continuous backups and restore to specific timestamp (Point in Time Restore)

- Monitoring dashboards

- Read replicas for improved read performance

- Multi AZ setup for DR (Disaster Recovery)

- Maintenance windows for upgrades

- Scaling capability (vertical and horizontal)

- Storage backed by EBS (GP2 or IO1)

* But you can't SSH into your instances

 

# RDS Backups

Automated backups are enabled by default; manual snapshots can also be used.

1) Backups are automatically enabled in RDS

2) Automated backups :

  - Daily full backup of the database (during the maintenance window)

  - Transaction logs are backed-up by RDS every 5 minutes

     => ability to restore to any point in time (from oldest backup to 5 minutes ago) 

  - 7 days retention (can be increased to 35 days)

3) DB Snapshots :

- Manually triggered by the user

- Retention of backup for as long as you want

 

 

[ RDS - Read Replicas for read scalability ]

Up to 5 read replicas, usable within an AZ, across AZs, or across regions; a replica can be promoted to a master.

Replication is asynchronous.

- Up to 5 Read Replicas

- Within AZ, Cross AZ or Cross Region

- Replication is Async, so reads are eventually consistent

- Replicas can be promoted to their own DB

- Applications must update the connection string to leverage read replicas

* Multi AZ keeps the same connection string regardless of which database is up. Read Replicas imply we need to reference them individually in our application as each read replica will have its own DNS name

Multi AZ keeps the same connection string, but each read replica has its own DNS name, so the application's connection strings must be updated to reference the read replicas.
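
A hedged sketch of creating a read replica with the AWS CLI (instance identifiers are placeholders) :

> aws rds create-db-instance-read-replica --db-instance-identifier mydb-read-1 --source-db-instance-identifier mydb
> aws rds describe-db-instances --db-instance-identifier mydb-read-1 --query "DBInstances[0].Endpoint.Address"     # the replica's own DNS name, to be referenced by the app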

 

# Read Replicas Use cases

Create an RDS read replica and point the reporting/analytics application at it.

The original application is unaffected.

1) You have a production database that is taking on normal load

2) You want to run a reporting application to run some analytics

3) You create a Read Replica to run the new workload there

4) The production application is unaffected

5) Read replicas are used for SELECT-only kinds of statements (not INSERT/UPDATE/DELETE)

 

# Read Replicas Network Cost

There is no network cost for replicas within the same AZ.

In AWS there's a network cost when data goes from one AZ to another

To reduce the cost, you can have your Read Replicas in the same AZ (Free)

 

 

# RDS Multi AZ (Disaster Recovery)

Synchronous replication.

Used for backup/failover (DR), not for reads, writes or scaling.

Every write is also applied to the standby replica.

If the master fails, the standby becomes the master (failover).

Can be set up across multiple AZs.

- Sync replication

- One DNS name - automatic app failover to standby

- Increase availability

- Failover in case of loss of AZ, loss of network, instance or storage failure

- No manual intervention in apps

- Not used for scaling

* The Read Replicas can be set up as Multi AZ for Disaster Recovery (DR) ***

 

[ RDS Security : 1. Encryption ]

RDS security : encryption

1. At rest encryption

Encryption is possible using KMS.

It has to be defined at launch time.

If the master is not encrypted, the read replicas cannot be encrypted either.

- Possibility to encrypt the master & read replicas with AWS KMS - AES-256 encryption

- Encryption has to be defined at launch time

- If the master is not encrypted, the read replicas cannot be encrypted

- TDE(Transparent Data Encryption) available for Oracle and MS SQL Server

 

2. In flight encryption

- SSL certificates to encrypt data to RDS in flight

- Provide SSL options with trust certificate when connecting to database

- To enforce SSL:

  -- PostgreSQL : rds.force_ssl=1 in the AWS RDS Console (Parameter Groups)

  -- MySQL : GRANT USAGE ON *.* TO 'mysqluser'@'%' REQUIRE SSL; (Within the DB)

 

# RDS Encryption Operations

Encrypting RDS backups

- Snapshots of un-encrypted RDS databases are un-encrypted

- Snapshots of encrypted RDS databases are encrypted

- Can copy a snapshot into an encrypted one

 

To encrypt an un-encrypted RDS database :

1) Create a snapshot of the un-encrypted database

2) Copy the snapshot and enable encryption for the snapshot

3) Restore the database from the encrypted snapshot

4) Migrate applications to the new database, and delete the old database

: unencrypted DB => snapshot => copy snapshot as encrypted => create DB from snapshot
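
A hedged AWS CLI sketch of that flow (instance/snapshot identifiers and the KMS key are placeholders) :

> aws rds create-db-snapshot --db-instance-identifier mydb --db-snapshot-identifier mydb-snap
> aws rds copy-db-snapshot --source-db-snapshot-identifier mydb-snap --target-db-snapshot-identifier mydb-snap-encrypted --kms-key-id alias/aws/rds     # the copy is encrypted
> aws rds restore-db-instance-from-db-snapshot --db-instance-identifier mydb-encrypted --db-snapshot-identifier mydb-snap-encrypted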

 

[ RDS Security : 2. Network & IAM ]

Network Security

- RDS databases are usually deployed within a private subnet, not in a public one

- RDS security works by leveraging security groups (the same concept as for EC2 instances) - it controls which IP/security group can communicate with RDS

 

Access Management

- IAM policies help control who can manage AWS RDS (through the RDS API)

- Traditional Username and Password can be used to login into the database

- IAM-based authentication can be used to login into RDS MySQL & PostgreSQL

 

# RDS - IAM Authentication

- IAM database authentication works with MySQL and PostgreSQL

- You don't need a password, just an authentication token obtained through IAM & RDS API calls

- Auth token has a lifetime of 15 minutes

* Benefits :

  - Network in/out must be encrypted using SSL

  - IAM to centrally manage users instead of DB

  - Can leverage IAM Roles and EC2 Instance profiles for easy integration
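
A hedged sketch of IAM authentication against a MySQL RDS instance (hostname, user, region and CA bundle file are placeholders; the DB user is assumed to have been created with the AWS IAM authentication plugin) :

> TOKEN=$(aws rds generate-db-auth-token --hostname mydb.abcdefg.ap-northeast-2.rds.amazonaws.com --port 3306 --username iam_db_user --region ap-northeast-2)
> mysql -h mydb.abcdefg.ap-northeast-2.rds.amazonaws.com -P 3306 -u iam_db_user --enable-cleartext-plugin --ssl-ca=rds-ca-bundle.pem --password="$TOKEN"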

 

Reference :

https://wbluke.tistory.com/58

 


[ EBS Snapshots *** ]

Creating a snapshot is similar to backing up the data.

Snapshots are not restricted to an AZ / Region.

Creating a snapshot uses IO, so it shouldn't be done while the application is under heavy load.

Snapshots are stored in S3.

Snapshots can be created on a schedule using Amazon Data Lifecycle Manager.

- Incremental - only backup changed blocks

- EBS backups use IO and you shouldn't run them while your application is handling a lot of traffic

- Snapshots will be stored in S3 (but you won't directly see them)

- Not necessary to detach volume to do snapshot, but recommended

- Max 100000 snapshots

- can copy snapshots across AZ or Region

- Can make AMI from Snapshot

- EBS volumes restored by snapshots need to be pre-warmed (using fio or dd command to read the entire volume)

- Snapshots can be automated using Amazon Data Lifecycle Manager

 

[ EBS Migration ]

By snapshotting a volume and creating a new volume from the snapshot, the AZ restriction can be worked around.

- EBS volumes are locked to a specific AZ

- To migrate it to a different AZ (or region) :

  1) Snapshot the volume

  2) (optional) Copy the snapshot to a different region

  3) Create a volume from the snapshot in the AZ of your choice
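
A hedged AWS CLI sketch of those steps (volume/snapshot IDs and AZ names are placeholders) :

> aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "migrate to another AZ"
> aws ec2 wait snapshot-completed --snapshot-ids snap-0123456789abcdef0
> aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone ap-northeast-2c     # new volume in the target AZ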

[ EBS Encryption ]

If a volume is created from an encrypted snapshot, the volume is encrypted as well.

- When you create an encrypted EBS volume, you get the following :

  -- Data at rest is encrypted inside the volume

  -- All the data in flight moving between the instance and the volume is encrypted

  -- All snapshots are encrypted

  -- All volumes created from the snapshot are encrypted

- Encryption and decryption are handled transparently (you have nothing to do)

- Encryption has a minimal impact on latency

- EBS Encryption leverages keys from KMS (AES-256)

- Copying an unencrypted snapshot allows encryption

- Snapshots of encrypted volumes are encrypted

 

[ # Encryption : encrypt an unencrypted EBS volume ]

How to encrypt an EBS volume that is not encrypted :

- Create an EBS snapshot of the volume

- Encrypt the EBS snapshot (using copy)

- Create new EBS volume from the snapshot (the volume will also be encrypted)

- Now you can attach the encrypted volume to the original instance
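
A hedged AWS CLI sketch of those steps (IDs, region and device name are placeholders) :

> aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
> aws ec2 copy-snapshot --source-region ap-northeast-2 --source-snapshot-id snap-0123456789abcdef0 --encrypted --kms-key-id alias/aws/ebs     # the copy is encrypted
> aws ec2 create-volume --snapshot-id snap-0fedcba9876543210 --availability-zone ap-northeast-2a     # a volume created from an encrypted snapshot is encrypted
> aws ec2 attach-volume --volume-id vol-0aaaabbbbccccdddd --instance-id i-0123456789abcdef0 --device /dev/xvdf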

 

[ EBS vs Instance Store ]

An instance store (a physical disk attached to the host) has better IO performance than EBS and can be good for buffers/caches, but its data is lost when the instance is stopped or terminated.

- Some instances do not come with Root EBS volumes

- Instead, they come with an "Instance store" (= ephemeral storage)

- Instance store is physically attached to the machine (EBS is a network drive)

* Pros of Instance Store :

  - Better I/O performance

  - Good for buffer/cache/scratch data temporary content

  - Data survives reboots

* Cons :

  - On stop or termination, the instance store is lost

* Local EC2 Instance Store

  - Physical disk attached to the physical server where your EC2 is

  - Very High IOPS (because physical)

  - Disks up to 7.5 TB (can change over time), striped to reach 30 TB (can change over time)

  - Block Storage (just like EBS)

  - Cannot be increased in size

  - Risk of data loss if hardware fails

 

[ EBS RAID configurations ]

# RAID 0 : striping

Expands disk space (and IO) by combining volumes.

- Combining 2 or more volumes and getting the total disk space and I/O

- If one disk fails, all the data is lost

- An application that needs a lot of IOPS and doesn't need fault-tolerance

- A database that has replication already built-in

- Using this we can have a very big disk with a lot of IOPS

# RAID 1 : mirroring (increases fault tolerance)

Improves reliability by mirroring disks.

- Mirroring a volume to another

- If one disk fails, our logical volume is still working

- Sends the data to two EBS volumes at the same time

- Applications that need to increase volume fault tolerance

- Applications where you need to service disks
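
A hedged sketch of building a RAID 0 array from two attached EBS volumes on Linux (device names and mount point are placeholders; --level=1 would give RAID 1 mirroring instead) :

> sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/xvdf /dev/xvdg     # stripe the two volumes
> sudo mkfs -t ext4 /dev/md0
> sudo mkdir -p /raid0 && sudo mount /dev/md0 /raid0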

 

 

[ EFS - Elastic File System ]

Unlike EBS, it can be used across multiple AZs.

High performance, high cost.

Usable with Linux AMIs (not Windows).

- Managed NFS (network file system) that can be mounted on many EC2

- EFS works with EC2 instances in multi-AZ

- Highly available, scalable, expensive (about 3x gp2), pay per use

- Use cases : content management, web serving, data sharing, WordPress

- Uses NFSv4.1 protocol

- Uses security group to control access to EFS

- Compatible with Linux based AMI (not Windows)

- Encryption at rest using KMS

- POSIX file system(Linux) that has a standard file API

- File system scales automatically, pay-per-use, no capacity planning
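
A hedged sketch of mounting an EFS file system from a Linux EC2 instance over NFSv4.1 (the file system DNS name and mount point are placeholders; the mount target's security group must allow NFS from the instance) :

> sudo mkdir -p /mnt/efs
> sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-0123456789abcdef0.efs.ap-northeast-2.amazonaws.com:/ /mnt/efs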

 

# Performance & Storage Classes

EFS can periodically move files that are not accessed often to EFS-IA (Infrequent Access) to reduce cost.

1) EFS Scale

  - 1000s of concurrent NFS clients, 10+ GB/s throughput

  - Grow to Petabyte-scale network file system, automatically

2) Performance mode (set at EFS creation time)

  - General purpose (default): latency-sensitive use cases (web server, CMS, etc..)

  - Max I/O - higher latency, throughput, highly parallel (big data, media processing)

3) Storage Tiers (lifecycle management feature - move file after N days) ***

  - Standard : for frequently accessed files

  - Infrequent access(EFS-IA) : cost to retrieve files, lower price to store

 

 

[ EBS (Elastic Block Store) vs EFS (Elastic File System) ]

Differences between EBS and EFS :

1. EBS volumes

Can be mounted to only one EC2 instance at a time.

No multi-AZ use.

To migrate across AZs, create a snapshot and restore it in the target AZ.

 - can be attached to only one instance at a time

 - are locked at the AZ level

 - IO1 : can increase IO independently

 - GP2 : IO increases if the disk size increases

To migrate an EBS volume across AZ

 - Take a snapshot

 - Restore the snapshot to another AZ

 - EBS backups use IO and you shouldn't run them while your application is handling a lot of traffic

Root EBS volumes are deleted by default when the EC2 instance gets terminated (you can disable that)

 

2. EFS

Can be mounted on 100s of EC2 instances.

Multi-AZ.

Only usable with Linux instances.

Higher performance and higher cost than EBS.

- Mounting 100s of instances across AZ

- EFS share website files (WordPress)

- Only for Linux Instances (POSIX)

- EFS has a higher price point than EBS

- Can leverage EFS-IA for cost saving

 


[ 1. EBS : Elastic Block Store ]

When an EC2 instance is terminated, its root volume is removed with it.

EBS is essentially a network drive (conceptually similar to a NAS, just under a different name).

- An EC2 machine loses its root volume (main drive) when it is manually terminated.

- Unexpected terminations might happen from time to time (AWS would email you)

- Sometimes, you need a way to store your instance data somewhere

- An EBS Volume is a network drive you can attach to your instances while they run

- It allows your instances to persist data

 

[ EBS Volume ]

A network drive, not a locally attached physical disk.

It can be detached and attached even while the server is running.

To move it across AZs, create a snapshot and restore the volume from it.

- It's a network drive (not a physical drive)

  -- It uses the network to communicate with the instance, which means there might be a bit of latency

  -- It can be detached from an EC2 instance and attached to another one quickly

- It's locked to an AZ

  -- To move a volume across, you first need to snapshot it

- Have a provisioned capacity (size in GBs, and IOPS(I/O Ops Per Sec))

  -- You get billed for all the provisioned capacity

 

[ EBS Volume Types ]

There are four types of EBS volumes (below)

- EBS Volumes are characterized in Size/Throughput/IOPS (I/O Ops Per Sec)

- Only GP2 and IO1 can be used as boot volumes

1) IO1 (SSD)

High-performance SSD volume

Highest-performance SSD volume for mission-critical low-latency or high-throughput workloads

- Critical business applications that require sustained IOPS performance, or more than 16000 IOPS per volume (GP2 limit)

- For large database workloads (eg. MongoDB, Oracle, MySQL)

  * GB range : 4GB ~ 16TB

  * MIN IOPS : 100

  * MAX IOPS : 64000 (for Nitro instances) or 32000 (other instances)

  * IOPS per GB : 50 IOPS per GB

 

2) GP2 (SSD)

General-purpose SSD volume

General Purpose SSD volume that balances price and performance for a wide variety of workloads

- Recommended for most workloads

- System boot volumes

- Virtual desktops

- Low-latency interactive apps

- Development and test environments

  * GB range : 1GB ~ 16TB (Small GP2 volumes can burst IOPS to 3000)

  * MAX IOPS : 16000 

  * IOPS per GB : 3 IOPS per GB (means at 5,334 GB you are at the max 16,000 IOPS)

 

3) ST1 (HDD)

Low-cost HDD volume

Low cost HDD volume designed for frequently accessed, throughput-intensive workloads

- Streaming workloads requiring consistent, fast throughput at a low price

- Big Data, Data warehouses, Log processing

- Apache Kafka

- Cannot be a boot volume

  * GB range : 500GB ~ 16TB

  * MIN IOPS : 500

  * MAX throughput : 500MB/s (can burst)

 

4) SC1 (HDD)

Lowest-cost HDD volume

Lowest cost HDD volume designed for less frequently accessed workloads

- Throughput-oriented storage for large volumes of data that is infrequently accessed

- Scenarios where the lowest storage cost is important

- Cannot be a boot volume

  * GB range : 500GB ~ 16TB

  * MIN IOPS : 250

  * MAX throughput : 250MB/s (can burst)

 

[ # Hands-On ]

1. How to mount

1) Create the EBS volume : when launching the EC2 instance, EBS can be configured in Step 4 (Add Storage)

2) Check the mount status

> lsblk 

3) Check whether a file system exists on the drive

> sudo file -s /dev/{drivename}

4) Create a file system

> sudo mkfs -t ext4 /dev/{drivename}

5) Create the mount point directory

> sudo mkdir /data

6) Mount the volume

> sudo mount /dev/xvdb /data

7) Verify the mount

> lsblk

8) Create a test file in the mounted path

> sudo touch /data/hello.txt

9) Edit fstab

> sudo nano /etc/fstab 

/dev/{drivename} /data ext4 defaults,nofail 0 2     (enter the current mount information)

* fstab : the file system table. The mount information recorded here survives reboots, so the volume is mounted automatically at boot.

10) Check the file system

> sudo file -s /dev/{drivename}

 

2. unmount

> sudo umount /data

 

3. Mount after fstab has been configured

> sudo mount -a
