How to create a free CDN (Content Delivery Network) account

Background

Web page checking tools continually remind me that using a CDN (Content Delivery Network) would reduce the distance travelled and the time taken for files to reach the user viewing the web page.

I've made a couple of unsuccessful attempts at the lengthy and involved process of creating free accounts, but became distracted and had to move on to more pressing projects.

I would be grateful to any CDN users who could simplify the process, and I look forward to testing the new web page performance.

I have a couple of sites that run on Cloudflare. I don't remember what was involved in setting them up, other than changing the DNS settings to point to Cloudflare's name servers rather than mine, but if I can do it, anyone can! :shifty:

I'm not a fan of Cloudflare; from memory, complications arose with dynamic sites and their caching system. Cloudflare works fine on static sites, and I suppose I could try setting up a "placeholder" site to store the CSS and image files and link to it from my other sites.

I would prefer using the free 1GB CDN on Amazon or Google but, as mentioned, they are remarkably complicated and cater to paying customers.

Will all assets on the CDN be publicly accessible over the internet, or would you prefer more fine-grained control, restricting assets to specific authenticated users?

All assets to be publicly available.

The assets are for personal sites, so I just need a link to the assets on the CDN.

  • AWS S3 provides 5 GB of S3 storage free on the 12-month free tier.
  • AWS CloudFront provides 1 TB of transfer out and 10 million requests per month free.

In addition, for new accounts many more free and discounted services are available for a year.

That being said, the complexity of setting up a CDN on AWS comes from the fact that there isn't really a push-button way to create one. Creating a CDN on AWS requires combining two services: S3 and CloudFront.

https://aws.amazon.com/s3/
https://aws.amazon.com/cloudfront/

S3 is an elastic object storage service. The top level of each store (call it a drive) is a bucket. A bucket can be made publicly available over the internet, but that is not recommended. Instead, CloudFront can be used to set up a distribution that exposes an S3 bucket, or a folder (prefix) within that bucket, to the public internet.
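
As a minimal sketch of the S3 side (assuming the AWS SDK for JavaScript v3; the bucket name and key below are placeholders, not anything from this thread), uploading an asset looks roughly like this:

// Minimal sketch: upload one asset to an S3 bucket with the AWS SDK for JavaScript v3.
// "my-cdn-assets" and the key are placeholder names.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

const s3 = new S3Client({ region: "us-east-1" });

async function uploadAsset(localPath: string, key: string): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-cdn-assets",  // placeholder bucket name
      Key: key,                 // e.g. "images/avatar.png"
      Body: await readFile(localPath),
      ContentType: "image/png", // set per file type
    })
  );
}

uploadAsset("./avatar.png", "images/avatar.png").catch(console.error);

A CloudFront distribution is then pointed at the bucket (or a prefix within it) so the objects are served from edge locations rather than directly from the bucket.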

I think this video explains how to do this in detail.

https://aws.amazon.com/cloudfront/getting-started/S3/

I asked about access control because AWS actually has some very powerful and flexible access control features. Using what are called federated identities, read and write access can be restricted to objects and/or prefixes in S3. This requires using zero-trust signed URLs or signed cookies to authenticate and identify users. That said, you can federate users through AWS using a provider adaptor, and some of the default adaptors provided integrate directly with OAuth and SAML.
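
To make the signed URL idea concrete, here is a minimal sketch of issuing a time-limited presigned URL for a single object with the AWS SDK for JavaScript v3; the bucket and key names are placeholders, and in a real setup the credentials would come from the federated identity rather than being configured by hand:

// Sketch: generate a presigned (time-limited, signed) URL for one private S3 object.
// Bucket and key are placeholders; credentials are resolved from the environment here.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

async function getTemporaryLink(key: string): Promise<string> {
  const command = new GetObjectCommand({ Bucket: "my-private-assets", Key: key });
  // The URL stops working after an hour, so access is both authenticated and time-boxed.
  return getSignedUrl(s3, command, { expiresIn: 3600 });
}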

For my own application I'm actually not just storing compiled JS, CSS, and HTML, but also data entities in natural form as JSON documents inside S3. For me this is a very cheap, highly available, scalable solution compared to a bulky relational database or even a managed NoSQL DB like MongoDB. All get-by-id (UUID) requests in the application fetch JSON flat files from the CDN rather than a physical database.
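
The get-by-id pattern itself is nothing exotic; a rough sketch (the domain and prefix below are made-up placeholders, not the real application's URLs):

// Rough sketch of "get by id": entities are stored as JSON objects keyed by UUID,
// so a read is just an HTTP GET against the CDN. Domain and prefix are placeholders.
interface PanelPage {
  id: string;
  [field: string]: unknown;
}

async function getPanelPage(id: string): Promise<PanelPage> {
  const res = await fetch(`https://cdn.example.com/panelpages/${id}.json`);
  if (!res.ok) throw new Error(`Fetch failed with status ${res.status}`);
  return (await res.json()) as PanelPage;
}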

Lastly, S3 objects can be used with Athena. In Athena you can build tables, much like in MySQL, from objects and their contents stored in S3. Once the tables are defined and populated, they can be queried using SQL.

https://aws.amazon.com/athena
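
For example, once a table has been defined over the JSON objects, a query can be started from code as well as from the console. A sketch using the Athena SDK (the database, table, and results bucket are placeholder names):

// Sketch: run a SQL query over JSON objects in S3 via Athena.
// Database, table, and result bucket names are placeholders.
import { AthenaClient, StartQueryExecutionCommand } from "@aws-sdk/client-athena";

const athena = new AthenaClient({ region: "us-east-1" });

async function countPanelPages(): Promise<string | undefined> {
  const result = await athena.send(
    new StartQueryExecutionCommand({
      QueryString: "SELECT COUNT(*) FROM panelpages",
      QueryExecutionContext: { Database: "my_cdn_db" },                   // placeholder
      ResultConfiguration: { OutputLocation: "s3://my-athena-results/" }, // placeholder
    })
  );
  // Poll GetQueryExecution / GetQueryResults with this id to read the results.
  return result.QueryExecutionId;
}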

I haven't programmed in PHP in half a decade, but when I was working on a PHP project way back when, I used Flysystem to easily read and write files to S3.

https://flysystem.thephpleague.com/v2/docs/


Wow, many thanks, this is far more than I expected and well worth pursuing.

The supplied information is very informative and isolates the required features from the immense documentation.

I look forward to being back at the desktop to try your recommendations.

I eventually managed to create an account, which was not easy. Their validation images are difficult to read and it took several attempts. Then came the telephone SMS validation, which is supposed to take up to ten minutes. Three attempts were followed by a support ticket, which wasn't immediately answered, and eventually, after at least an hour, three SMS notifications arrived!

S3 bucket created, half a dozen images uploaded, and now I'm trying to fathom out how to set up the cross-origin (CORS) permissions.

I’ve had enough for today and hope to render the images on my webpage tomorrow.


I don't have MFA set up for my accounts. The validation images are frustrating; it's surprising that hasn't been fixed yet. I have the same experience every time I need to log in.


Eventually I managed to create the Amazon S3 CDN images. It was a struggle :frowning:

Buckets containing the files are inaccessible by default, and making the objects public requires the following JSON bucket policy:

{"Version": "2008-10-17",
"Statement": [{"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MY_BUCKET_NAME/*"
}]}

I had this idea about mirroring my file structure, but I was unable to open any images if they were in a bucket sub-directory. I searched, and the advice was convoluted; I will have to reread it with a clear head in order to understand the solutions.

Anyway, I now have another tool in my toolbox and will experiment to find the best usage.

Here we go - my very first Amazon S3 Image web-page showing just my avatars :slight_smile:

If you want to host images, Cloudinary is a great way to go. They can automatically perform smart image resizing, cropping and conversion (favoring WebP if it is supported by the browser) while delivering images via a high speed CDN.

They have a very generous free tier — I don’t pay anything for my personal site and no credit card is required. If you’re not super invested in AWS, they might make a good alternative. You can also have sub-directories.
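
To give a flavour of how those transformations work, Cloudinary encodes them directly in the delivery URL. A small sketch (the cloud name and public id are placeholders):

// Sketch: Cloudinary transformations are expressed in the delivery URL itself.
// "demo-cloud" and "avatars/me.png" are placeholders for a real cloud name and public id.
function cloudinaryUrl(publicId: string, width: number): string {
  const cloudName = "demo-cloud";
  // c_fill crops to fill the box, f_auto serves WebP/AVIF where supported, q_auto tunes quality.
  return `https://res.cloudinary.com/${cloudName}/image/upload/w_${width},c_fill,f_auto,q_auto/${publicId}`;
}

console.log(cloudinaryUrl("avatars/me.png", 400));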


I was actually using Cloudinary for a while. I ultimately made the decision to move to S3 because it would be much less expensive long term and I didn't require many of the added media-manipulation features. At the time I didn't have a great understanding of fine-grained access control, which is now a huge benefit of AWS. I also have a Creative Cloud subscription, and it has been on my mind whether that could be used as a public-facing CDN as well. That said, S3 and AWS are working out well for my side-project purposes. Much of that knowledge is directly connected to my professional work, as I'm able to speak to and provide actually detailed solutions for cloud migrations, even bypassing custom REST APIs and communicating directly with cloud services like S3 in the browser.

This prototype demonstrates securely fetching objects from an S3 bucket directly in the browser, using zero-trust signed HTTP requests for calls to the REST API. There is a basic server-side proxy to circumvent CORS, but other than that the HTTP request is created and signed in the browser and sent directly to the S3 REST API without the need for any custom middle layer.

https://uhf0kayrs4.execute-api.us-east-1.amazonaws.com/dev-test-virtual-list-flex-v1/character/1011334

This is the TypeScript S3 implementation of that, with the signHttpRequest method at the very bottom. Instead of using the S3 SDK, the HTTP request is manually created so it can be signed using that method.

This method can be applied in any JavaScript application. It can also be applied to any AWS service, not just S3.
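
The general shape of that signing step looks roughly like the sketch below (assuming the AWS JavaScript SDK v3 signing utilities; the bucket, region, and credential handling are placeholders, and in the real prototype the credentials come from the federated identity):

// Sketch: manually build and SigV4-sign an HTTP request to the S3 REST API,
// instead of going through the high-level S3 SDK client.
import { SignatureV4 } from "@aws-sdk/signature-v4";
import { HttpRequest } from "@aws-sdk/protocol-http";
import { Sha256 } from "@aws-crypto/sha256-js";

async function signHttpRequest(
  credentials: { accessKeyId: string; secretAccessKey: string; sessionToken?: string },
  bucket: string,
  key: string
): Promise<HttpRequest> {
  const signer = new SignatureV4({
    credentials,
    region: "us-east-1", // placeholder region
    service: "s3",
    sha256: Sha256,
  });

  const request = new HttpRequest({
    method: "GET",
    protocol: "https:",
    hostname: `${bucket}.s3.amazonaws.com`,
    path: `/${key}`,
    headers: { host: `${bucket}.s3.amazonaws.com` },
  });

  // Returns the same request with Authorization and x-amz-* headers added.
  return (await signer.sign(request)) as HttpRequest;
}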

This is the request that hits AWS S3 directly (through a basic proxy) and which would be V4 signed.

https://uhf0kayrs4.execute-api.us-east-1.amazonaws.com/awproxy/s3/classifieds-ui-prod/panelpages/63a4219d-254e-11ec-ab14-c613312e594f.json

That file contains a domain object ("panel page") stored in natural form as a JSON S3 object.

The same thing is also being done with OpenSearch, which is where all the application routes are stored.

https://opensearch.org/

These two solutions combined effectively provide a feature-rich, low-cost, highly available, auto-scaling, secure alternative to a traditional relational database or even something like MongoDB. Not to mention, fine-grained access control can be taken full advantage of by using signed URLs to communicate with each service through federated identities in AWS Cognito.

https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html
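
A minimal sketch of wiring that up in the browser with a Cognito identity pool (the pool id, login provider, and bucket are placeholders):

// Sketch: obtain temporary, scoped AWS credentials through a Cognito identity pool
// and use them with an S3 client. Pool id, login provider, and bucket are placeholders.
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

async function readProtectedObject(idToken: string): Promise<void> {
  const s3 = new S3Client({
    region: "us-east-1",
    credentials: fromCognitoIdentityPool({
      clientConfig: { region: "us-east-1" },
      identityPoolId: "us-east-1:00000000-0000-0000-0000-000000000000", // placeholder
      logins: { "accounts.google.com": idToken },                       // placeholder federated login
    }),
  });

  // The IAM role attached to the pool limits these credentials, so read access
  // can be scoped down to particular buckets or prefixes.
  await s3.send(new GetObjectCommand({ Bucket: "my-cdn-assets", Key: "images/avatar.png" }));
}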

