Amazon S3 multiple subdomains and private files

Hi,

I’ve got multiple subdomains, and for each subdomain I would like to store some files on Amazon S3. All these files are private, so they can only be downloaded after a user logs in. Also, some of the files are for a specific user, while others are for multiple users. Is this possible with Amazon S3, and what is the best way to achieve it?

Many thanks

What you can do is keep all files in the S3 buckets private, and once you’ve decided in the application that someone is allowed to access one of those files, generate a presigned URL for them (see https://docs.aws.amazon.com/AmazonS3/latest/userguide/ShareObjectPreSignedURL.html).
The user can then visit the URL for a limited amount of time (which you can specify when creating the signed URL), after which the URL will no longer work.
There is of course still the chance that the user will share such a URL with someone else, but that chance gets smaller as you make the time the link is valid shorter.

Hi @rpkamp many thanks for your reply. I would like to ask you some clarification:

I suppose I need to assign the file details (not sure what details I will then need to identify the file on Amazon S3 for a future download) to a specific user in a MySQL database, correct? Then check if that file is assigned to the user and then generate the presigned URL.

Also, to avoid confusion between the different subdomains, I would like to upload the files for each subdomain into subdomain folders inside the same bucket. Will that be possible? How do I then identify where the file is stored in the Amazon S3 bucket when I need to create the presigned URL?

Many thanks

Yes. You’d need to store the connection between a user and which files they are allowed to access in the database.
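That lookup might look something like the following minimal sketch. The `user_files` table and its column names are assumptions for illustration, not something from this thread, and an in-memory SQLite database stands in for MySQL so the snippet is self-contained.

```php
<?php
// Hypothetical user_files table (user_id, s3_key) linking users to the
// S3 objects they are allowed to download.
function findAllowedKey(PDO $db, int $userId, string $s3Key): ?string
{
    $stmt = $db->prepare(
        'SELECT s3_key FROM user_files WHERE user_id = :uid AND s3_key = :key'
    );
    $stmt->execute([':uid' => $userId, ':key' => $s3Key]);
    $row = $stmt->fetch(PDO::FETCH_ASSOC);

    return $row ? $row['s3_key'] : null;
}

// Demo with an in-memory SQLite database standing in for MySQL.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE user_files (user_id INTEGER, s3_key TEXT)');
$db->exec("INSERT INTO user_files VALUES (42, 'foo.example.com/reports/Lorem.pdf')");

$key = findAllowedKey($db, 42, 'foo.example.com/reports/Lorem.pdf');
// Only generate a presigned URL when $key is not null.
```

The application would run this check first and only ask S3 for a presigned URL when a matching row exists.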

S3 object keys can contain slashes, which the console displays as directories, so you can create any structure you like and save the file path in the database. You can then use that path to ask the S3 API to generate a presigned URL.

As for the subdomains, that could be the first level of folders.

So a path might look something like

foo.example.com/subfolder/subsubfolder/Lorem.pdf

You can create a URI with s3 as the scheme and the bucket name as the “host” if that is easier. At least then there won’t be any confusion as to which bucket the file is in.

That would look something like

s3://my-bucket/foo.example.com/subfolder/subsubfolder/Lorem.pdf

You can then use parse_url in PHP to get the separate parts (bucket name, path) out.
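For example, the parts come out of `parse_url` like this (the bucket and path values here are just the example from above):

```php
<?php
// Splitting an s3:// URI into bucket name and object key with parse_url.
$uri = 's3://my-bucket/foo.example.com/subfolder/subsubfolder/Lorem.pdf';

$parts  = parse_url($uri);
$bucket = $parts['host'];              // "my-bucket"
$key    = ltrim($parts['path'], '/');  // "foo.example.com/subfolder/subsubfolder/Lorem.pdf"
```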


Yes, I was thinking about creating several buckets, one for each subdomain, but I believe I read somewhere that there is a limit on how many buckets can be created. I might be wrong though.

Checking the code in the aws-sdk examples here: https://github.com/awsdocs/aws-doc-sdk-examples/blob/main/php/example_code/s3/PresignedURL.php

I can see this code:

//Creating a presigned URL
$cmd = $s3Client->getCommand('GetObject', [
    'Bucket' => 'my-bucket',
    'Key' => 'testKey'
]);

It passes the bucket name (not sure if I can pass a bucket subdirectory, for example my-bucket/subdomain)? The Key, I suppose, will be the same for each subdomain, as it refers to the bucket and not the folder inside it, correct?

The Bucket is just the bucket name. No folders here.

The Key is the path of the file within the given bucket (including any directories).
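So the full flow might look something like this sketch, assuming the aws/aws-sdk-php (v3) package is installed via Composer; the region, bucket, key, and expiry values are only examples:

```php
<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region'  => 'eu-west-1',
    'version' => 'latest',
]);

// Bucket is just the bucket name; Key is the full path inside it,
// including the subdomain "folder".
$cmd = $s3Client->getCommand('GetObject', [
    'Bucket' => 'my-bucket',
    'Key'    => 'foo.example.com/subfolder/subsubfolder/Lorem.pdf',
]);

// The URL stops working after the expiry time given here.
$request      = $s3Client->createPresignedRequest($cmd, '+15 minutes');
$presignedUrl = (string) $request->getUri();
```

`$presignedUrl` is what you hand to the user after the database check says they are allowed to download the file.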


Perfect, sorry, very silly of me, I thought the Key was something else :frowning: I should have read the documentation. Many thanks for your help :wink:


No worries. The name Key is quite confusing. It’s also a term often used in cryptography, which makes it even more confusing combined with presigned URLs.

Oh well.

An advantage of using AWS Cognito for authentication is that cookies can be used in place of signed URLs and requests. This works well in scenarios where documents like PDFs need to be served directly from a bucket efficiently while also being protected by IAM roles and policies.

Furthermore, zero-trust presigned URLs can be generated directly in the browser. This is the most secure method to grant users access to private content. A presigned URL generated in the browser provides limited access to resources by federating auth through AWS Cognito and mapping users to a role based on user attributes.

In my own applications, every outbound request to AWS is newly signed prior to being dispatched to any service REST API. This offers an optimal zero-trust security model.

I’ve set up a private video archive in S3 and CloudFront. I found the easiest way to grant users access to it was cookies, since I was using AWS Cognito for authentication.

How would this integrate with an existing PHP application?

Cognito federated identity pools provide several different adapters, such as SAML and OAuth.

https://docs.aws.amazon.com/cognito/latest/developerguide/external-identity-providers.html

It also appears the PHP API is similar to the JavaScript one, providing signing capabilities.

https://docs.aws.amazon.com/aws-sdk-php/v3/api/class-Aws.Signature.SignatureV4.html#_signRequest

Something like this would need to be converted to use PHP and the PHP SDK v3 instead of the JavaScript SDK v3.

interface CreateSignHttpRequestParams {
  body?: string;
  headers?: Record<string, string>;
  hostname: string;
  method?: string;
  path?: string;
  port?: number;
  protocol?: string;
  query?: Record<string, string>;
  service: string;
  cognitoSettings: CognitoSettings,
  authFacade: AuthFacade
}

const createS3SignedHttpRequest = ({
  body,
  headers,
  hostname,
  method = "GET",
  path = "/",
  port = 443,
  protocol = "https:",
  query,
  service,
  cognitoSettings,
  authFacade
}: CreateSignHttpRequestParams): Observable<HttpRequest> => of(
  new HttpRequest({
    body,
    headers,
    hostname,
    method,
    path,
    port,
    protocol,
    query,
  }
)).pipe(
  tap(() => console.log('.marker({ event: BEGIN , context: s3, entity: sig , op: signv4 , meta: {  } })')),
  switchMap(req => from(
    (new SignatureV4(
      {
        credentials: fromCognitoIdentityPool({
          client: new CognitoIdentityClient({ region: cognitoSettings.region }),
          identityPoolId: cognitoSettings.identityPoolId,
          logins: {
            [`cognito-idp.${cognitoSettings.region}.amazonaws.com/${cognitoSettings.userPoolId}`]: () => firstValueFrom(authFacade.getUser$.pipe(map(u => u ? u.id_token : undefined)))
          }
        }),
        region: cognitoSettings.region,
        service,
        sha256: Sha256,
      }
    )).sign(req)
      .then(
        signedReq => {
          console.log('.marker({ event: RESOLVED, entity: s3 , op: signv4 , meta: {  } })');
          return signedReq;
        }
      )
  ).pipe(
    tap(() => console.log('.marker({ /s3/sign/after/sig })')),
    take(1)
  )),
  map(req => req as HttpRequest),
  tap(() => console.log('.marker({ event: END , context: s3, entity: sig , op: signv4 , meta: {  } })')),
);

This call creates temporary federated identity pool credentials that are then used to sign the request. The logins literal contains the current user’s auth token, which is exchanged for temporary credentials from the identity pool.

fromCognitoIdentityPool({
  client: new CognitoIdentityClient({ region: cognitoSettings.region }),
  identityPoolId: cognitoSettings.identityPoolId,
  logins: {
    [`cognito-idp.${cognitoSettings.region}.amazonaws.com/${cognitoSettings.userPoolId}`]:
      () => firstValueFrom(authFacade.getUser$.pipe(map(u => u ? u.id_token : undefined)))
  }
})

The same thing can be achieved in PHP, because the end result is merely a signed HTTP request that is dispatched to the AWS REST API. That can be done using cURL or whatever other PHP HTTP library is being used.
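A PHP counterpart of the signing step might look something like this sketch, assuming the aws/aws-sdk-php (v3) and guzzlehttp/psr7 packages are installed, and that temporary credentials (e.g. exchanged via Cognito) are already in hand; the region, bucket, path, and credential values are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Aws\Credentials\Credentials;
use Aws\Signature\SignatureV4;
use GuzzleHttp\Psr7\Request;

// Temporary credentials obtained elsewhere (placeholder values).
$credentials = new Credentials('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'SESSION_TOKEN');

// Sign for the S3 service in the given region.
$signer = new SignatureV4('s3', 'eu-west-1');

$request = new Request(
    'GET',
    'https://my-bucket.s3.eu-west-1.amazonaws.com/foo.example.com/Lorem.pdf'
);

// signRequest() returns a new PSR-7 request carrying the SigV4
// Authorization header; dispatch it with any PSR-7-capable HTTP client.
$signed = $signer->signRequest($request, $credentials);
```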
