Metadata-Version: 2.1
Name: aws-cdk.aws-s3
Version: 1.36.0
Summary: CDK Constructs for AWS S3
Home-page: https://github.com/aws/aws-cdk
Author: Amazon Web Services
License: Apache-2.0
Project-URL: Source, https://github.com/aws/aws-cdk.git
Description: ## Amazon S3 Construct Library
        
        <!--BEGIN STABILITY BANNER-->

        ---
        
        
        ![cfn-resources: Stable](https://img.shields.io/badge/cfn--resources-stable-success.svg?style=for-the-badge)
        
        ![cdk-constructs: Stable](https://img.shields.io/badge/cdk--constructs-stable-success.svg?style=for-the-badge)
        
        ---
        <!--END STABILITY BANNER-->
        
        Define an unencrypted S3 bucket.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        Bucket(self, "MyFirstBucket")
        ```
        
        `Bucket` constructs expose the following deploy-time attributes and methods:

        * `bucketArn` - the ARN of the bucket (e.g. `arn:aws:s3:::bucket_name`)
        * `bucketName` - the name of the bucket (e.g. `bucket_name`)
        * `bucketWebsiteUrl` - the website URL of the bucket (e.g.
          `http://bucket_name.s3-website-us-west-1.amazonaws.com`)
        * `bucketDomainName` - the domain name of the bucket (e.g. `bucket_name.s3.amazonaws.com`)
        * `bucketDualStackDomainName` - the dual-stack domain name of the bucket (e.g.
          `bucket_name.s3.dualstack.eu-west-1.amazonaws.com`)
        * `bucketRegionalDomainName` - the regional domain name of the bucket (e.g.
          `bucket_name.s3.eu-west-1.amazonaws.com`)
        * `arnForObjects(pattern)` - the ARN of an object or objects within the bucket (e.g.
          `arn:aws:s3:::bucket_name/exampleobject.png` or
          `arn:aws:s3:::bucket_name/Development/*`)
        * `urlForObject(key)` - the URL of an object within the bucket (e.g.
          `https://s3.cn-north-1.amazonaws.com.cn/china-bucket/mykey`)
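
        The string formats above can be illustrated with a plain-Python sketch (the `bucket_attribute_examples` helper is hypothetical and for illustration only; in a real CDK app these values are deploy-time tokens resolved by CloudFormation):

        ```python
        # Illustration only: mirrors the attribute string formats listed above.
        # In a real CDK app these are unresolved tokens until deployment.
        def bucket_attribute_examples(bucket_name, region):
            return {
                "bucketArn": f"arn:aws:s3:::{bucket_name}",
                "bucketDomainName": f"{bucket_name}.s3.amazonaws.com",
                "bucketRegionalDomainName": f"{bucket_name}.s3.{region}.amazonaws.com",
                "arnForObjects": f"arn:aws:s3:::{bucket_name}/Development/*",
            }

        print(bucket_attribute_examples("bucket_name", "eu-west-1")["bucketArn"])
        # arn:aws:s3:::bucket_name
        ```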
        
        ### Encryption
        
        Define a KMS-encrypted bucket:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "MyEncryptedBucket",
            encryption=BucketEncryption.KMS
        )

        # you can access the generated encryption key:
        assert isinstance(bucket.encryption_key, kms.Key)
        ```
        
        You can also supply your own key:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        my_kms_key = kms.Key(self, "MyKey")
        
        bucket = Bucket(self, "MyEncryptedBucket",
            encryption=BucketEncryption.KMS,
            encryption_key=my_kms_key
        )
        
        assert bucket.encryption_key is my_kms_key
        ```
        
        Use `BucketEncryption.KMS_MANAGED` to use the AWS-managed KMS key for S3:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "Buck",
            encryption=BucketEncryption.KMS_MANAGED
        )
        
        assert bucket.encryption_key is None
        ```
        
        ### Permissions
        
        A bucket policy will be automatically created for the bucket upon the first call to
        `addToResourcePolicy(statement)`:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "MyBucket")
        bucket.add_to_resource_policy(iam.PolicyStatement(
            actions=["s3:GetObject"],
            resources=[bucket.arn_for_objects("file.txt")],
            principals=[iam.AccountRootPrincipal()]
        ))
        ```
        
        Most of the time, you won't have to manipulate the bucket policy directly.
        Instead, buckets expose "grant" methods that give prepackaged sets of permissions
        to other resources. For example:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        # "lambda" is a reserved word in Python, so the Lambda module is imported as "lambda_"
        fn = lambda_.Function(self, "Lambda",
            runtime=lambda_.Runtime.PYTHON_3_7,
            handler="index.handler",
            code=lambda_.Code.from_inline("def handler(event, context):\n    pass")
        )

        bucket = Bucket(self, "MyBucket")
        bucket.grant_read_write(fn)
        ```
        
        This gives the Lambda function's execution role permission to read and write
        objects in the bucket.
        
        ### Sharing buckets between stacks
        
        To use a bucket in a different stack in the same CDK application, pass the object to the other stack:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        #
        # Stack that defines the bucket
        #
        class Producer(cdk.Stack):
        
            def __init__(self, scope, id, **kwargs):
                super().__init__(scope, id, **kwargs)
        
                bucket = s3.Bucket(self, "MyBucket",
                    removal_policy=cdk.RemovalPolicy.DESTROY
                )
                self.my_bucket = bucket
        
        #
        # Stack that consumes the bucket
        #
        class Consumer(cdk.Stack):
            def __init__(self, scope, id, *, user_bucket, **kwargs):
                super().__init__(scope, id, **kwargs)
        
                user = iam.User(self, "MyUser")
                user_bucket.grant_read_write(user)
        
        producer = Producer(app, "ProducerStack")
        Consumer(app, "ConsumerStack", user_bucket=producer.my_bucket)
        ```
        
        ### Importing existing buckets
        
        To import an existing bucket into your CDK application, use the `Bucket.fromBucketAttributes`
        factory method. This method accepts `BucketAttributes` which describes the properties of an already
        existing bucket:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket.from_bucket_attributes(self, "ImportedBucket",
            bucket_arn="arn:aws:s3:::my-bucket"
        )
        
        # now you can just call methods on the bucket
        user = iam.User(self, "MyUser")
        bucket.grant_read_write(user)
        ```
        
        Alternatively, short-hand factories are available as `Bucket.fromBucketName` and
        `Bucket.fromBucketArn`, which will derive all bucket attributes from the bucket
        name or ARN respectively:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        by_name = Bucket.from_bucket_name(self, "BucketByName", "my-bucket")
        by_arn = Bucket.from_bucket_arn(self, "BucketByArn", "arn:aws:s3:::my-bucket")
        ```
        
        ### Bucket Notifications
        
        The Amazon S3 notification feature enables you to receive notifications when
        certain events happen in your bucket as described under [S3 Bucket
        Notifications](https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html) of the S3 Developer Guide.
        
        To subscribe to bucket notifications, use the `bucket.addEventNotification` method. The
        `bucket.addObjectCreatedNotification` and `bucket.addObjectRemovedNotification` methods
        cover these common use cases.
        
        The following example will subscribe an SNS topic to be notified of all `s3:ObjectCreated:*` events:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        import aws_cdk.aws_s3_notifications as s3n
        
        my_topic = sns.Topic(self, "MyTopic")
        bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.SnsDestination(my_topic))
        ```
        
        This call will also ensure that the topic policy can accept notifications for
        this specific bucket.
        
        Supported S3 notification targets are exposed by the `@aws-cdk/aws-s3-notifications` package.
        
        It is also possible to specify S3 object key filters when subscribing. The
        following example will notify `myQueue` when objects with the `foo/` prefix
        and `.jpg` suffix are removed from the bucket.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket.add_event_notification(s3.EventType.OBJECT_REMOVED,
            s3n.SqsDestination(my_queue), prefix="foo/", suffix=".jpg")
        ```
        
        ### Block Public Access
        
        Use `blockPublicAccess` to specify [block public access settings](https://docs.aws.amazon.com/AmazonS3/latest/dev/access-control-block-public-access.html) on the bucket.
        
        Enable all block public access settings:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "MyBlockedBucket",
            block_public_access=BlockPublicAccess.BLOCK_ALL
        )
        ```
        
        Block and ignore public ACLs:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "MyBlockedBucket",
            block_public_access=BlockPublicAccess.BLOCK_ACLS
        )
        ```
        
        Alternatively, specify the settings manually:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "MyBlockedBucket",
            block_public_access=BlockPublicAccess(block_public_policy=True)
        )
        ```
        
        When `blockPublicPolicy` is set to `true`, `grantPublicRead()` throws an error.
        
        ### Logging configuration
        
        Use `serverAccessLogsBucket` to describe where server access logs are to be stored.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        access_logs_bucket = Bucket(self, "AccessLogsBucket")
        
        bucket = Bucket(self, "MyBucket",
            server_access_logs_bucket=access_logs_bucket
        )
        ```
        
        It's also possible to specify a prefix for Amazon S3 to assign to all log object keys.
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "MyBucket",
            server_access_logs_bucket=access_logs_bucket,
            server_access_logs_prefix="logs"
        )
        ```
        
        ### Website redirection
        
        You can use the following two properties to specify the bucket [redirection policy](https://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html#advanced-conditional-redirects). Note that these two properties cannot both be applied to the same bucket.
        
        #### Static redirection
        
        You can statically redirect to a given bucket URL or any other host name with `websiteRedirect`:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "MyRedirectedBucket",
            website_redirect={"host_name": "www.example.com"}
        )
        ```
        
        #### Routing rules
        
        Alternatively, you can define multiple `websiteRoutingRules` to describe complex, conditional redirections:
        
        ```python
        # Example automatically generated without compilation. See https://github.com/aws/jsii/issues/826
        bucket = Bucket(self, "MyRedirectedBucket",
            website_routing_rules=[{
                "host_name": "www.example.com",
                "http_redirect_code": "302",
                "protocol": RedirectProtocol.HTTPS,
                "replace_key": ReplaceKey.prefix_with("test/"),
                "condition": {
                    "http_error_code_returned_equals": "200",
                    "key_prefix_equals": "prefix"
                }
            }]
        )
        ```
        
        ### Filling the bucket as part of deployment
        
        To put files into a bucket as part of a deployment (for example, to host a
        website), see the `@aws-cdk/aws-s3-deployment` package, which provides a
        resource that can do just that.
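
        As a minimal sketch (assuming the `aws_cdk.aws_s3_deployment` package is installed and a local `./website` directory holds the site content; the construct IDs here are illustrative):

        ```python
        # Sketch only: deploys the contents of ./website into a website bucket
        import aws_cdk.aws_s3_deployment as s3deploy

        website_bucket = Bucket(self, "WebsiteBucket",
            website_index_document="index.html"
        )

        s3deploy.BucketDeployment(self, "DeployWebsite",
            sources=[s3deploy.Source.asset("./website")],
            destination_bucket=website_bucket
        )
        ```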
        
Platform: UNKNOWN
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: JavaScript
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Typing :: Typed
Classifier: Development Status :: 5 - Production/Stable
Classifier: License :: OSI Approved
Requires-Python: >=3.6
Description-Content-Type: text/markdown
