I have a couple of sites that run on Cloudflare. I don’t remember what was involved in setting them up, other than changing the DNS settings to point to Cloudflare’s nameservers rather than mine, but if I can do it, anyone can!
I’m not a fan of Cloudflare; from memory, complications arose with dynamic sites and their caching system. Cloudflare works fine on static sites, and I suppose I could try setting up a “placeholder” site to store the CSS and image files and link to it from my other sites.
I would prefer using the free 1GB CDN on Amazon or Google, but as mentioned, they are remarkably complicated and cater mainly to paying customers.
AWS CloudFront’s free tier provides 1 TB of data transfer out and 10 million requests per month. In addition, new accounts get many more free and discounted services for the first year.
That said, the complexity of setting up a CDN on AWS comes from there not being a push-button way to create one. Creating a CDN on AWS requires combining two services: S3 and CloudFront.
S3 is an elastic object storage service. The top level of each storage unit (call it a drive) is a bucket. A bucket can be made publicly available over the internet, but that is not recommended. Instead, CloudFront can be used to set up a distribution that exposes an S3 bucket, or a folder (prefix) within that bucket, to the public internet.
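As a sketch of how the two services connect: with the distribution configured to read from the bucket via origin access control (OAC), the bucket itself stays private and its policy grants read access only to that one distribution. The bucket name, account ID, and distribution ID below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "cloudfront.amazonaws.com" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }
      }
    }
  ]
}
```

With a policy like this attached, requests that bypass CloudFront and hit the bucket URL directly are denied.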
I think this video will explain how to do this in detail; it sounds like it does.
I asked about access control because AWS actually has some very powerful and flexible access control features. Using what are called federated identities, read/write access can be restricted to objects and/or prefixes in S3. This requires using zero-trust signed URLs or signed cookies to authenticate and identify users. That said, you can federate users through AWS using a provider adapter, and some of the default adapters provided integrate directly with OAuth and SAML.
For my own application I’m not just storing compiled JS, CSS, and HTML, but also data entities in their natural form as JSON documents inside S3. For me this is a very cheap, highly available, scalable solution compared to a bulky relational database, or even a managed NoSQL DB like Mongo. All get-by-id (UUID) requests in the application fetch JSON flat files from the CDN rather than a physical database.
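A minimal sketch of that get-by-id pattern (the bucket layout, CDN domain, and entity shape here are my illustrative assumptions, not the actual implementation): each entity is written once as a JSON document under a predictable key, and every read is a plain HTTP GET against the CDN instead of a database query.

```python
import json
import uuid

# Hypothetical CloudFront distribution domain in front of the S3 bucket.
CDN_BASE = "https://d111111abcdef8.cloudfront.net"

def object_key(entity_type: str, entity_id: uuid.UUID) -> str:
    """Deterministic S3 key for an entity, e.g. 'entities/user/<uuid>.json'."""
    return f"entities/{entity_type}/{entity_id}.json"

def cdn_url(entity_type: str, entity_id: uuid.UUID) -> str:
    """The URL a browser would GET instead of querying a database by primary key."""
    return f"{CDN_BASE}/{object_key(entity_type, entity_id)}"

# The object stored at that key is simply the entity serialized as JSON:
user_id = uuid.UUID("12345678-1234-5678-1234-567812345678")
document = json.dumps({"id": str(user_id), "name": "Ada"})

print(cdn_url("user", user_id))
```

Because the key is derived entirely from the entity type and UUID, no lookup index is needed: the client can compute the URL itself.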
Lastly, S3 objects can be used with Athena. In Athena you can build tables, much like in MySQL, from objects and their contents stored in S3. Once the tables are defined and populated, they can be queried using SQL.
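For illustration, a table over those JSON documents might be declared roughly like this (the table name, columns, and bucket path are assumptions; the JSON SerDe expects one object per line):

```sql
CREATE EXTERNAL TABLE entities (
  id         string,
  name       string,
  created_at string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://example-bucket/entities/';

-- Once defined, it can be queried with plain SQL:
SELECT id, name FROM entities WHERE created_at > '2023-01-01';
```

Athena scans the objects under that prefix at query time, so there is no separate load step.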
I eventually managed to create an account, which was not easy. Their validation images are difficult to read and it took several attempts. Then came the telephone SMS validation, which is supposed to take up to ten minutes. Three attempts, followed by a support ticket which wasn’t immediately answered, and eventually, after at least an hour, three SMS notifications arrived!
S3 bucket created, half a dozen images uploaded, and now I’m trying to fathom out how to set up the cross-origin (CORS) permissions.
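For anyone else stuck at this step: cross-origin access is a CORS configuration on the bucket itself (bucket → Permissions → Cross-origin resource sharing in the S3 console). A minimal rule allowing GETs from one site looks something like this (the origin is a placeholder):

```json
[
  {
    "AllowedOrigins": ["https://www.example.com"],
    "AllowedMethods": ["GET"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```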
I’ve had enough for today and hope to render the images on my webpage tomorrow.
I had this idea about creating mirror-image file structures, but was unable to open any images if they were in a bucket sub-directory. I searched, and the advice was convoluted; I will have to reread it with a clear head in order to understand the solutions.
Anyway, I now have another tool in my toolbox and will experiment to find the best usage.
If you want to host images, Cloudinary is a great way to go. They can automatically perform smart image resizing, cropping and conversion (favoring WebP if it is supported by the browser) while delivering images via a high speed CDN.
They have a very generous free tier — I don’t pay anything for my personal site and no credit card is required. If you’re not super invested in AWS, they might make a good alternative. You can also have sub-directories.
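Cloudinary’s transformations are encoded directly in the delivery URL, which is what makes the resizing and format conversion automatic. A small sketch of composing such a URL (the cloud name and image ID are placeholders; `f_auto` is the flag that lets Cloudinary serve WebP where the browser supports it):

```python
def cloudinary_url(cloud_name: str, public_id: str, *transforms: str) -> str:
    """Compose a Cloudinary delivery URL with inline transformation parameters."""
    base = f"https://res.cloudinary.com/{cloud_name}/image/upload"
    if transforms:
        # Transformations are comma-separated in a single path segment.
        return f"{base}/{','.join(transforms)}/{public_id}"
    return f"{base}/{public_id}"

# 300px wide, cropped to fill, automatic format (WebP when supported), automatic quality:
url = cloudinary_url("demo", "sample.jpg", "w_300", "c_fill", "f_auto", "q_auto")
print(url)
```

The same source image can then be requested at any size or format just by changing the URL, with Cloudinary generating and caching each variant on first request.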
I was actually using Cloudinary for a while. I ultimately made the decision to move to S3 because it would be much less expensive long term, and I didn’t need many of the added media manipulation features. At the time I didn’t have a great understanding of fine-grained access control, which is now a huge benefit of AWS. I also have a Creative Cloud subscription, and it has been on my mind whether that could be used as a public-facing CDN as well. That said, S3 and AWS are working out well for my side-project purposes. Much of that knowledge is directly connected to my professional work: I’m able to speak about and provide genuinely detailed solutions for cloud migrations, even bypassing custom REST APIs and communicating directly with cloud services like S3 from the browser.
This prototype demonstrates securely fetching objects from an S3 bucket directly in the browser, using zero-trust signed HTTP requests for calls to the REST API. There is a basic server-side proxy to circumvent CORS, but other than that the HTTP request is created and signed in the browser and sent directly to the S3 REST API without the need for any custom middle layer.
This is the TypeScript S3 implementation of that, with the signHttpRequest method at the very bottom. Instead of using the S3 SDK, the HTTP request is constructed manually so it can be signed using that method.
This is the request that hits AWS S3 directly through a basic proxy; it would be Signature Version 4 (SigV4) signed.
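The core of SigV4 signing is a chain of HMAC-SHA256 operations that derives a per-day, per-region, per-service signing key from the secret key, then signs a canonical "string to sign" with it. A stdlib-only sketch of that derivation (the credentials and string to sign here are dummies, and a full signHttpRequest would also build the canonical request that the string to sign is based on):

```python
import hashlib
import hmac

def _hmac(key: bytes, msg: str) -> bytes:
    """HMAC-SHA256 of a UTF-8 string under a raw byte key."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """SigV4 key derivation: chained HMACs over date (YYYYMMDD), region,
    service, and the literal terminator 'aws4_request'."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

def sign(string_to_sign: str, signing_key: bytes) -> str:
    """Final hex signature that goes into the request's Authorization header."""
    return hmac.new(signing_key, string_to_sign.encode("utf-8"),
                    hashlib.sha256).hexdigest()

key = derive_signing_key("dummy-secret", "20240101", "us-east-1", "s3")
print(sign("example-string-to-sign", key))
```

Because the derivation is pure HMAC math over the credentials and request, it can run entirely in the browser; S3 recomputes the same signature server-side to verify the request, which is why no custom middle layer is needed.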
These two solutions combined effectively provide a feature-rich, low-cost, highly available, auto-scaling, secure alternative to a traditional relational database, or even something like Mongo. Not to mention that fine-grained access control can be taken full advantage of by using signed URLs to communicate with each service through federated identities in AWS Cognito.