[ Snowball ]

- Physical data transport solution that helps move TBs or PBs of data into or out of AWS

- Alternative to moving data over the network (and paying network fees)

- Secure, tamper resistant, uses KMS 256-bit encryption

- Tracking using SNS and text messages, E-ink shipping label

- Pay per data transfer job

ex) large data cloud migrations, DC decommission, DR

     If it takes more than a week to transfer over the network, use Snowball devices

 

[ Snowball : process ]

1. Request Snowball devices from the AWS console for delivery

2. Install the Snowball client on your servers

3. Connect the Snowball to your servers and copy files using the client

4. Ship back the device when you're done (goes to the right AWS facility)

5. Data will be loaded into an S3 bucket

6. Snowball is completely wiped

7. Tracking is done using SNS, text messages and the AWS console

 

[ Snowball Edge ]

- Snowball Edges add computational capability to the device

- 100TB capacity with either :

  1) Storage optimized - 24 vCPU

  2) Compute optimized - 52 vCPU & optional GPU

- Supports a custom EC2 AMI so you can perform processing on the go

- Supports custom Lambda functions

- Very useful to pre-process the data while moving

ex) data migration, image collation, IoT capture, machine learning

 

[ Snowmobile ]

- Transfer exabytes of data (1 EB = 1,000 PB = 1,000,000 TB)

- Each Snowmobile has 100 PB of capacity (use multiple in parallel)

- Better than Snowball if you transfer more than 10 PB

 

[ Snowball into Glacier ]

Snowball data cannot go straight into Glacier; upload it to S3 first and let a lifecycle policy move it to Glacier

- Snowball cannot import to Glacier directly

- You have to use Amazon S3 first, and an S3 lifecycle policy
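
* A minimal sketch of such a lifecycle rule with boto3 (the bucket name and prefix are hypothetical) :

import boto3

s3 = boto3.client("s3")

# Transition imported objects to Glacier as soon as possible (Days=0)
s3.put_bucket_lifecycle_configuration(
    Bucket="my-snowball-import-bucket",  # hypothetical bucket the Snowball job imported into
    LifecycleConfiguration={
        "Rules": [{
            "ID": "snowball-to-glacier",
            "Status": "Enabled",
            "Filter": {"Prefix": "imported/"},  # hypothetical prefix
            "Transitions": [{"Days": 0, "StorageClass": "GLACIER"}],
        }]
    },
)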

 

 


[ CloudFront Signed URL / Signed Cookies ]

- You want to distribute paid shared content to premium users over the world

- We can use CloudFront Signed URLs/Cookies, attaching a policy that :

   1) includes the URL expiration

   2) includes the IP ranges allowed to access the data

   3) lists trusted signers (which AWS accounts can create signed URLs)

- How long should the URL be valid for?

  -- Shared content (movie, music) : make it short (a few minutes)

  -- Private content (private to the user) : you can make it last for years

 

* Signed URL = access to individual files (one signed URL per file)

* Signed Cookies = access to multiple files (one signed cookie for many files)

 

# CloudFront Signed URL Diagram

1. The client authenticates with the application

2. The app uses the AWS SDK to generate a Signed URL and returns it to the client

3. The client uses the Signed URL to access the object in S3 through CloudFront
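
A sketch of step 2 with the Python SDK, using botocore's CloudFrontSigner (the key pair ID, private key file, and distribution URL are hypothetical) :

import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign the policy with the CloudFront key pair's private key (hypothetical file)
    with open("private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # hypothetical key pair ID

# Short expiration (a few minutes) because this is shared premium content
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/movie.mp4",  # hypothetical distribution/file
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(minutes=5),
)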

 

[ CloudFront Signed URL vs S3 Pre-Signed URL ]

A CloudFront Signed URL reaches S3 through a CloudFront edge;

an S3 Pre-Signed URL accesses S3 directly (using IAM)

1. CloudFront Signed URL

- Allow access to a path, no matter the origin

- Account wide key-pair, only the root can manage it

- Can filter by IP, path, date, expiration

- Can leverage caching features

 

2. S3 Pre-Signed URL

- Issue a request as the person who pre-signed the URL

- Uses the IAM key of the signing IAM principal

- Limited lifetime

 

 

 

[ AWS Global Accelerator ]

[ Global users for our application ]

Clients that reach a global service over the public internet pass through many hops before arriving at the app, which adds latency

- You have deployed an application and have global users who want to access it directly

- They go over the public internet, which can add a lot of latency due to many hops

- We wish to go as fast as possible through AWS network to minimize latency

 

# Unicast IP vs AnyCast IP

With Anycast, every server holds the same IP address and each client is routed to the nearest one

Unicast IP : one server holds one IP address

Anycast IP : all servers hold the same IP address and the client is routed to the nearest one

 

[ AWS Global Accelerator ]

Clients reach the app through an edge location and the AWS internal network instead of the public internet

- Leverage the AWS internal network to route to your application

- 2 Anycast IPs are created for your application

- The Anycast IPs send traffic directly to Edge Locations

- The Edge Locations send the traffic to your application

- Works with Elastic IP, EC2 instances, ALB, NLB, public or private

- Consistent Performance

  1) Intelligent routing to lowest latency and fast regional failover

  2) No issue with client cache (because the IP doesn't change)

  3) Internal AWS network

- Health Checks

  1) Global Accelerator performs a health check of your applications

  2) Helps make your application global (failover in less than 1 minute for unhealthy endpoints)

  3) Great for DR

- Security

  1) Only 2 external IPs need to be whitelisted

  2) DDoS protection thanks to AWS Shield

 

[ AWS Global Accelerator vs CloudFront ]

Both :

1) use the AWS global network and its edge locations around the world

2) integrate with AWS Shield for DDoS protection

Differences : 

CloudFront

- Improves performance for both cacheable content (ex: images and videos) and dynamic content (ex: API acceleration and dynamic site delivery)

- Content is served at the edge

Global Accelerator

- Improves performance for a wide range of applications over TCP or UDP

- Proxies packets at the edge to applications running in one or more AWS Regions

- Good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP

- Good for HTTP use cases that require static IP addresses

- Good for HTTP use cases that require deterministic, fast regional failover

 

# Hands-On : Global Accelerator

1. Create several instances to register as endpoints

2. Create the Global Accelerator (a boto3 sketch follows)

1) add endpoint groups (choose the regions)

2) register the instances from step 1 as endpoints in each region
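
A hedged boto3 sketch of the same setup (the instance ID and endpoint region are hypothetical; the Global Accelerator API itself must be called in us-west-2) :

import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(Name="my-accelerator", IpAddressType="IPV4", Enabled=True)

listener = ga.create_listener(
    AcceleratorArn=acc["Accelerator"]["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}],
)

# One endpoint group per region, pointing at the instances created in step 1
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="ap-northeast-2",
    EndpointConfigurations=[{"EndpointId": "i-0123456789abcdef0", "Weight": 100}],
)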

 

 

 

 


[ AWS CloudFront ]

When a user in Korea requests content from an S3 bucket in Australia, it is served from data cached at a nearby edge (e.g. Tokyo)

- Content Delivery Network (CDN)

- Improves read performance, content is cached at the edge

- 216 Points of Presence globally (edge locations)

- DDoS protection, integration with Shield, AWS Web Application Firewall

- Can expose external HTTPS and can talk to internal HTTPS backends

 

[ CloudFront - Origins ]

Security is improved by configuring the S3 bucket / custom origin so that only CloudFront may access it (OAI)

1. S3 bucket 

- For distributing files and caching them at the edge

- Enhanced security with CloudFront Origin Access Identity (OAI)

- CloudFront can be used as an ingress (to upload files to S3)

2. Custom Origin (HTTP)

- Application Load Balancer

- EC2 instance

- S3 website (must first enable the bucket as a static S3 website)

- Any HTTP backend you want

 

# CloudFront at a high level

 

# CloudFront - S3 as an Origin

 

# CloudFront - ALB or EC2 as an origin

 

[ CloudFront Geo Restriction ]

- You can restrict who can access your distribution

- Can use a whitelist or a blacklist

- The country is determined using a 3rd party Geo-IP database

  ex. Copyright Laws to control access to content

 

[ CloudFront vs S3 Cross Region Replication ]

1) CloudFront :

- Global Edge network

- Files are cached for a TTL (maybe a day)

- Great for static content that must be available everywhere

2) S3 Cross Region Replication :

- Must be set up for each region where you want replication to happen

- Files are updated in near real-time

- Read only

- Great for dynamic content that needs to be available at low latency in a few regions

 

 

 


[ S3 Performance ]

S3 auto-scales, and request rates are granted per prefix, so performance can be raised by spreading objects across more prefixes

- Amazon S3 automatically scales to high request rates, latency 100-200ms

- Your application can achieve at least 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per prefix in a bucket

- There are no limits to the number of prefixes in a bucket

- Prefix examples (object path -> prefix) :

  1) bucket/folder1/sub1/file -> prefix : /folder1/sub1/

  2) bucket/folder1/sub2/file -> prefix : /folder1/sub2/

  3) bucket/1/file -> prefix : /1/

  4) bucket/2/file -> prefix : /2/

 

* If you spread reads across all four prefixes evenly, you can achieve 22,000 requests per second for GET and HEAD

 

[ S3 KMS Limitation ]

Using SSE-KMS, the KMS encrypt/decrypt calls can become a performance bottleneck

- If you use SSE-KMS, you may be impacted by the KMS limits

- When you upload, it calls the GenerateDataKey KMS API

- When you download, it calls the Decrypt KMS API

- These count towards the KMS quota per second (5,500, 10,000, or 30,000 req/s depending on region)

- As of today, you cannot request a quota increase for KMS

 

[ S3 Performance ]

1. UPLOAD :

1) Multi-part upload

- Recommended for files > 100MB

- Must be used for files > 5GB

- Can help parallelize uploads (speed up transfers) - see the sketch after this list

 

2) S3 Transfer Acceleration (upload only)

- Increase transfer speed by transferring file to an AWS edge location which will forward the data to the S3 bucket in the target region

- Compatible with multi-part upload
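
A minimal multi-part upload sketch using boto3's transfer manager (file, bucket, and key names are hypothetical) :

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Switch to multi-part above 100MB and upload parts in parallel
config = TransferConfig(multipart_threshold=100 * 1024 * 1024, max_concurrency=10)

s3.upload_file("backup.tar", "my-bucket", "backups/backup.tar", Config=config)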

2. DOWNLOAD :

1) S3 Byte-Range Fetches

- Parallelize GETs by requesting specific byte ranges

- Better resilience in case of failures

- Can be used to speed up downloads

- Can be used to retrieve only partial data (for example the head of a file)
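
A byte-range fetch sketch (bucket/key are hypothetical) - only the requested bytes are transferred :

import boto3

s3 = boto3.client("s3")

# Retrieve only the first 1024 bytes, e.g. the head of a file
resp = s3.get_object(Bucket="my-bucket", Key="big-file.bin", Range="bytes=0-1023")
head = resp["Body"].read()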

 

[ S3 Select & Glacier Select ]

High performance through server-side filtering in S3

- Retrieve less data using SQL by performing server side filtering

- Can filter by rows & columns (simple SQL statements)

- Less network transfer, less CPU cost client-side
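
A hedged S3 Select sketch (bucket, key, and SQL are hypothetical; the object is assumed to be a CSV with a header row) :

import boto3

s3 = boto3.client("s3")

resp = s3.select_object_content(
    Bucket="my-bucket",
    Key="data.csv",
    ExpressionType="SQL",
    Expression="SELECT s.name FROM S3Object s WHERE s.country = 'KR'",  # filtered server-side
    InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
    OutputSerialization={"CSV": {}},
)

# The result arrives as an event stream; only matching rows left S3
for event in resp["Payload"]:
    if "Records" in event:
        print(event["Records"]["Payload"].decode())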

 

[ S3 Event Notifications ]

When an S3 event occurs, notifications can go to SNS, SQS, a Lambda function, etc.

Bucket versioning should be enabled (see the note on concurrent writes below)

- ObjectCreated, ObjectRemoved, ObjectRestore, Replication...

- Object name filtering possible (ex: *.jpg)

  ex: generate thumbnails of images uploaded to S3

- Can create as many "S3 events" as desired

- Can send an email/notification, add a message into a queue, or call a Lambda function to run custom code

 

- S3 event notifications typically deliver events in seconds but can sometimes take a minute or longer

- If two writes are made to a single non-versioned object at the same time, it is possible that only a single event notification will be sent

- If you want to ensure that an event notification is sent for every successful write, you should enable versioning on your bucket
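
A sketch of wiring an ObjectCreated event for *.jpg to a Lambda function (the bucket name and function ARN are hypothetical) :

import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:ap-northeast-2:123456789012:function:thumbnailer",
            "Events": ["s3:ObjectCreated:*"],
            # Generate thumbnails only for uploaded .jpg files
            "Filter": {"Key": {"FilterRules": [{"Name": "suffix", "Value": ".jpg"}]}},
        }]
    },
)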

 

[ AWS Athena ]

Files stay in the S3 bucket and can be queried/analyzed directly with SQL

Serverless service to perform analytics directly against S3 files

- Uses SQL language to query the files

- Has a JDBC/ODBC driver

- Charged per query and amount of data scanned

- Supports CSV, JSON, ORC, Avro, and Parquet (built on Presto)

  Use cases: Business intelligence/analytics/reporting, analyze & query VPC Flow Logs, ELB Logs, CloudTrail trails...

* To analyze data directly on S3, use Athena
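
A minimal Athena query sketch (the database, table, and result bucket are hypothetical) :

import boto3

athena = boto3.client("athena")

q = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM elb_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # results land in S3
)

# Poll get_query_execution(QueryExecutionId=...) until the state is SUCCEEDED
print(q["QueryExecutionId"])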

 

[ S3 Object Lock & Glacier Vault Lock ]

1) S3 Object Lock : locks an object for a fixed period

Adopt a WORM (Write Once Read Many) model

Block an object version's deletion for a specified amount of time

2) Glacier Vault Lock : once set, files can never be modified or deleted

Adopt a WORM model

Lock the policy against future edits (it can no longer be changed)

Helpful for compliance and data retention

 

 

 

 


[ S3 Storage Classes ]

1. Amazon S3 Standard (General Purpose)

 - High durability of objects across multiple AZs

 - Sustain 2 concurrent facility failures

 - eg. Big Data analytics, mobile&gaming applications, content distribution

 

2. Amazon S3 Intelligent Tiering

Low latency and high throughput; a small monitoring fee is charged

 - High durability of objects across multiple AZs

 - Same low latency and high throughput performance of S3 Standard

 - Small monthly monitoring and auto-tiering fee

 - Automatically moves objects between two access tiers based on changing access patterns

 - Resilient against events that impact an entire AZ

 

3. Amazon S3 Standard-IA (Infrequent Access)

Suited to data that is accessed infrequently; high performance at lower cost

 - High durability of objects across multiple AZs

 - Suitable for data that is less frequently accessed, but requires rapid access when needed

 - Low cost compared to Amazon S3 Standard

 - Sustain 2 concurrent facility failures

 - eg. As a data store for DR, backups

 

4. Amazon S3 One Zone-IA (Infrequent Access)

High performance and cheaper than Standard-IA, but a single AZ means it cannot serve as DR

 - Same as IA but data is stored in a single AZ

 - Data is lost if the AZ is destroyed

 - Low latency and high throughput performance

 - Supports SSL for data in transit and encryption at rest

 - Low cost compared to IA (by ~20%)

 - eg. Storing secondary backup copies of on-premises data, or storing data you can recreate (thumbnails)

 

5. Amazon Glacier

Data is not accessible for a short time right after storage; suited to long-term retention at low cost; a replacement for magnetic tape

 - Low cost object storage meant for archiving/backup

 - Data is retained for the longer term

 - Alternative to on-premises magnetic tape storage

 - Cost per storage per month + retrieval cost

 - Each item in Glacier is called "Archive" (not object)

 - Archives are stored in "Vaults" (not bucket)

 - 3 retrieval options :

   1) Expedited (1 to 5 minutes)

   2) Standard (3 to 5 hours)

   3) Bulk (5 to 12 hours)

   * Minimum storage duration of 90 days

 

6. Amazon Glacier Deep Archive

Cheaper than Amazon Glacier; data is unavailable for even longer after storage

 - Amazon Glacier Deep Archive - for long term storage - cheaper :

  1) Standard (12 hours)

  2) Bulk (48 hours)

  * Minimum storage duration of 180 days

 

7. Amazon S3 Reduced Redundancy Storage (deprecated/omitted)

 

 

# Moving between storage classes

- You can transition objects between storage classes

- For infrequently accessed objects, move them to STANDARD_IA

- For archive objects you don't need in real-time, use GLACIER or DEEP_ARCHIVE

- Moving objects can be automated using a lifecycle configuration
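
Besides lifecycle rules, a single object's class can be changed by copying it over itself - a sketch (names are hypothetical) :

import boto3

s3 = boto3.client("s3")

# Re-write the object in place with a new storage class
s3.copy_object(
    Bucket="my-bucket",
    Key="reports/2020.csv",
    CopySource={"Bucket": "my-bucket", "Key": "reports/2020.csv"},
    StorageClass="STANDARD_IA",
)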

 

[ S3 Lifecycle Rules ]

- Transition actions : define when objects are transitioned to another storage class

  1) Move objects to Standard IA class 60 days after creation

  2) Move to Glacier for archiving after 6 months

- Expiration actions : configure objects to expire (be deleted) after some time

  1) Access log files can be set to be deleted after 365 days

  2) Can be used to delete old versions of files (if versioning is enabled)

  3) Can be used to delete incomplete multi-part uploads

- Rules can be created for a certain prefix (ex: s3://mybucket/mp3/*)

- Rules can be created for certain object tags (ex: Department: Finance)
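
A sketch combining the rule types above (the prefixes and day counts are hypothetical) :

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-bucket",
    LifecycleConfiguration={
        "Rules": [
            {   # Transition actions
                "ID": "tier-down-mp3",
                "Status": "Enabled",
                "Filter": {"Prefix": "mp3/"},
                "Transitions": [
                    {"Days": 60, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
            },
            {   # Expiration actions + incomplete multi-part cleanup
                "ID": "expire-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Expiration": {"Days": 365},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
        ]
    },
)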

 

 


[ S3 Access Logs ]

- For audit purposes, you may want to log all access to S3 buckets

- Any request made to S3, from any account, authorized or denied, will be logged into another S3 bucket

- That data can be analyzed using data analysis tools

- Or Amazon Athena as we'll see later in this section

 

# Warning

Pointing the logging bucket at itself creates a logging loop, making the bucket grow without bound

* Do not set your logging bucket to be the monitored bucket

  It will create a logging loop, and your bucket will grow in size exponentially

 

 

[ S3 Replication ]

- Must enable versioning in source and destination

- Cross Region Replication (CRR)

- Same Region Replication (SRR)

- Buckets can be in different accounts

- Copying is asynchronous

- Must give proper IAM permissions to S3

- CRR Use cases : compliance, lower latency access, replication across accounts

- SRR Use cases : log aggregation, live replication between production and test accounts

 

- After activating, only new objects are replicated (not retroactive)

- For Delete operations :  any delete operation is not replicated

  If you delete without a version ID, it adds a delete marker, not replicated

  If you delete with a version ID, it deletes in the source, not replicated

- There is no "chaining" of replication

  If bucket 1 has replication into bucket 2, which has replication into bucket 3

  Then objects created in bucket 1 are not replicated to bucket 3
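
A hedged replication setup sketch (the role ARN and bucket names are hypothetical; versioning must already be enabled on both buckets) :

import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},  # deletes are not replicated
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
        }],
    },
)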

 

 

[ S3 Pre-signed URLs ]

- Can generate pre-signed URLs using SDK or CLI

  for downloads (easy, can use the CLI)

  for uploads (harder, must use the SDK)

- Valid for a default of 3600 seconds, can change timeout with --expires-in [TIME_BY_SECONDS] argument

- Users given a pre-signed URL inherit the permissions of the person who generated the URL for GET/PUT

Examples :

  1) Allow only logged-in users to download a premium video on your S3 bucket

  2) Allow an ever changing list of users to download files by generating URLs dynamically

  3) Temporarily allow a user to upload a file to a precise location in our bucket
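
SDK sketches for both directions (bucket and keys are hypothetical) :

import boto3

s3 = boto3.client("s3")

# Download URL, valid for 1 hour (the default is 3600 seconds)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "premium/video.mp4"},
    ExpiresIn=3600,
)

# Upload URL : the holder can PUT to exactly this key for 5 minutes
upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "uploads/user123.png"},
    ExpiresIn=300,
)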

 

 

 

 

 

 

 


[ S3 MFA-DELETE ]

MFA (a second authentication factor, e.g. a code from an authenticator device) protects bucket files against deletion

MFA Delete can only be configured from the CLI

With MFA Delete enabled, permanently deleting a file version requires MFA authentication

Ordinary deletes still work, but the deletion history (permanent delete) cannot be removed without MFA

- MFA (multi factor authentication) forces users to generate a code on a device (usually a mobile phone or hardware token) before doing important operations on S3

- To use MFA-Delete, enable versioning on the S3 bucket

- You will need MFA to permanently delete an object version or to suspend versioning on the bucket

- You won't need MFA for enabling versioning or for listing deleted versions

- Only the bucket owner (root account) can enable/disable MFA-DELETE

- MFA-Delete currently can only be enabled using the CLI
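
A sketch of the CLI call (the bucket name, MFA device ARN, and code are hypothetical) :

> aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"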

 

 


[ AWS CLI Configuration ] 

How to properly configure the CLI

1. Bad way

Authenticating to AWS from EC2 with a user's security credentials (access key id / secret access key, via the aws configure command) is insecure; avoid it anywhere other than a personal machine or internal network

- We could run 'aws configure' on EC2.

- This way is super insecure; never put your personal credentials on an EC2 instance

- your personal credentials are personal and only belong on your personal computer

- If the EC2 is compromised, so is your personal account

- If the EC2 is shared, other people may perform AWS actions while impersonating you

 

> aws configure

> enter the user's access key id

> enter the user's secret access key

> enter the region name

> cat ~/.aws/credentials then shows the configured credentials (access key id / secret access key) in plain text (insecure)

 

2. Right way

Authenticate EC2 instances by attaching an IAM Role with the appropriate policy instead

- IAM Roles can be attached to EC2 instances

- IAM Roles can come with a policy authorizing exactly what the EC2 instance should be able to do

- EC2 instances can then use these profiles automatically without any additional configuration

 

* The AWS Policy Generator (a UI for viewing and selecting permissions) makes it easy to build the IAM JSON

* The IAM Policy Simulator can be used to test the configured IAM Roles/policies

 

[ AWS EC2 Instance Metadata ]

From the CLI, metadata can be fetched with curl http://169.254.169.254/latest/meta-data

- AWS EC2 Instance Metadata is powerful but one of the least known features to developers

- It allows AWS EC2 instances to "learn about themselves" without using an IAM Role for the purpose

- The URL is http://169.254.169.254/latest/meta-data

- You can retrieve the IAM Role name from the metadata, but you cannot retrieve the IAM Policy

  Metadata = Info about the EC2 instance

  Userdata = launch script of the EC2 instance

ex) 1. curl http://169.254.169.254/latest/meta-data/hostname

     2. curl http://169.254.169.254/latest/meta-data/iam/security-credentials/{EC2RoleName}

 

 

[ AWS SDK ]

- What if you want to perform actions on AWS directly from your applications code? (without using CLI)

- You can use an SDK (software development kit)

- Official SDKs are Java/.NET/Node.js/PHP/Python etc.

- We have to use the AWS SDK when coding against AWS Services such as DynamoDB

- The AWS CLI uses the Python SDK (boto3)

* If you don't specify or configure a default region, us-east-1 will be chosen by default

 

- It's recommended to use the default credential provider chain

- The default credential provider chain works seamlessly with:

  AWS credentials at ~/.aws/credentials (only on our computers or on premise)

  Instance Profile Credentials using IAM Roles (for EC2 machines, etc..)

  Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

- Overall, Never Ever Store AWS Credentials in your code.

 

# Exponential Backoff

- Any API call that fails because of too many calls needs to be retried with Exponential Backoff

- This applies to rate-limited APIs

- Retry mechanism included in SDK API calls
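
A minimal sketch of the idea in Python (boto3 already retries like this internally; the helper below is hypothetical) :

import random
import time

def with_backoff(call, max_retries=5):
    # Retry a throttled call, doubling the wait each attempt, plus jitter
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:  # in practice, catch the SDK's throttling/rate-limit error
            if attempt == max_retries - 1:
                raise
            time.sleep((2 ** attempt) + random.random())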

 

 


[ S3 Websites ]

- S3 can host static websites and have them accessible on the www

- The website URL will be :

  {bucket-name}.s3-website-{AWS-region}.amazonaws.com

  OR

  {bucket-name}.s3-website.{AWS-region}.amazonaws.com

- If you get a 403 (forbidden) error, make sure the bucket policy allows public reads
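
A sketch of enabling website hosting via the SDK (the bucket and document names are hypothetical) :

import boto3

s3 = boto3.client("s3")

s3.put_bucket_website(
    Bucket="my-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)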

 

[ # CORS ]

- An origin is a scheme (protocol), host (domain) and port

- CORS means Cross-Origin Resource Sharing

- Web Browser based mechanism to allow requests to other origins while visiting the main origin

   Same origin : http://example.com/app1 & http://example.com/app2

   Different origins : http://www.example.com & http://other.example.com 

- The requests won't be fulfilled unless the other origin allows for the requests using CORS Headers(Access-Control-Allow-Origin)

 

[ S3 CORS *** ]

- If a client does a cross-origin request on our S3 bucket, we need to enable the correct CORS headers

- You can allow for a specific origin or for * (all origins)
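
A sketch of a CORS rule allowing GETs from one origin (the bucket and origin are hypothetical) :

import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.example.com"],  # or ["*"] for all origins
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    },
)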

 

[ Amazon S3 - Consistency Model ]

- Read after write consistency for PUTS of new objects

  1) As soon as a new object is written, we can retrieve it (ex: PUT 200 => GET 200)

  2) If we did a GET before to see if the object existed (ex: GET 404 => PUT 200 => GET 404) - eventually consistent

- Eventual Consistency for DELETES and PUTS of existing objects

  1) If we read an object after updating, we might get the older version (ex: PUT 200 => PUT 200 => GET 200 (might be older version))

  2) If we delete an object, we might still be able to retrieve it for a short time (ex: DELETE 200 => GET 200)

* there's no way to request "strong consistency"


[ S3 Security ]

1) User based

- IAM policies - which API calls should be allowed for a specific user from IAM console

2) Resource Based

- Bucket Policies - bucket wide rules from the S3 console - allows cross account

- Object Access Control List (ACL) - finer grain

- Bucket Access Control List (ACL) - less common

 

* an IAM principal can access an S3 object if the user IAM permissions allow it OR the resource policy ALLOWS it

* AND there's no explicit DENY

 

 

[ S3 Bucket Policies ]

- JSON based policies

  Resources : buckets and objects

  Actions : Set of API to Allow or Deny

  Effect : Allow / Deny

  Principal : The account or user to apply the policy to

- Use an S3 bucket policy to :

  Grant public access to the bucket

  Force objects to be encrypted at upload

 

[ # Hands-on : Bucket Policies ]

Using the Policy Generator

1) Select the Policy Type : S3 Bucket Policy

2) Add statements

First statement :

Effect : Deny

Principal : * (anywhere)

Actions : PutObject

Amazon Resource Name (ARN) : enter ARN/* (the ARN / bucket name can be found in the S3 management console)

3) Add Conditions

Condition : Null

Key : s3:x-amz-server-side-encryption

value : true

 

2) Add statements

Second statement :

Effect : Deny

Principal : * (anywhere)

Actions : PutObject

Amazon Resource Name (ARN) : enter ARN/* (the ARN / bucket name can be found in the S3 management console)

3) Add Conditions

Condition : StringNotEquals

Key : s3:x-amz-server-side-encryption

value : AES256

 

4) Click Generate Policy to produce the JSON

5) Copy & paste the JSON into the bucket policy

 

* With this configuration, uploading an object (file) without encryption (SSE-S3) fails with Access Denied.
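
For reference, the generated JSON applied with the SDK (the bucket name is hypothetical) :

import json

import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny uploads that carry no encryption header at all
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
        {   # Deny uploads whose encryption header is not SSE-S3 (AES256)
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
        },
    ],
}

s3.put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))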


 

[ Bucket settings for Block Public Access ]

- Block public access to buckets and objects granted through

  1) new access control lists (ACLs)

  2) any access control lists (ACLs)

  3) new public bucket or access point policies

Account-level Block Public Access settings can be used to block all public access to buckets

- Block public and cross-account access to buckets and objects through any public bucket or access point policies

* These settings were created to prevent company data leaks

- If you know your bucket should never be public, leave these on

- Can be set at the account level
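
A sketch of turning on all four blocks for one bucket (the name is hypothetical; an account-level variant exists via the s3control API) :

import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="my-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)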

 

 

[ S3 Security - Other ]

1) Networking :

  - Supports VPC Endpoints (for instances in a VPC without public internet access)

2) Logging and Audit :

  - S3 Access Logs can be stored in another S3 bucket

  - API calls can be logged in AWS CloudTrail

3) User Security :

  - MFA Delete : MFA (multi factor authentication) can be required in versioned buckets to delete objects

  - Pre-Signed URLs : URLs that are valid only for a limited time (ex: premium video service for logged in users)

 
