Architecture

How this site meets the internet.

A deliberate AWS layout for a low-volume personal domain: cheap at rest, boring to operate, and useful while I study for the Associate cert — documented here as build-in-public notes, not marketing fluff.

The shape of it

Visitors hit Amazon CloudFront first — TLS at the edge, caching rules, and a single DNS front door. CloudFront does not send every request to the same place: it picks an origin based on path patterns (behaviors) you configure. Getting that order wrong is the difference between a styled site and a broken one.

Browser ──► DNS ──► CloudFront (TLS)
                          │
          ┌───────────────┴───────────────┐
          │                               │
    /assets/*                     default (*)
          │                               │
          ▼                               ▼
   Object storage               Application load
   (hashed JS/CSS,              balancer → compute
    long-lived cache)            (SSR, POSTs, RSS)
                                        │
                                        ▼
                                  Managed containers
                                  (Node + app server)
                                        │
                          ┌───────────────┴───────────────┐
                          ▼                               ▼
                   Key-value store                 Email API
                   (signups)                      (notifications)

The split origin matters: hashed bundles under /assets/* are ideal for the immutable, cache-friendly edge. HTML, server functions, and anything that must execute on the server stay on compute behind a load balancer. Viewer HTTPS does not require HTTPS to the app tier; a common pattern is HTTPS at the CDN and HTTP on port 80 from the CDN to the balancer inside your VPC — as long as you are comfortable with that hop and lock down security groups.

Rule of thumb: the /assets/* behavior must exist and must be more specific than the default catch-all. If only * points at compute, the browser still requests files like /assets/*.css and gets 404s for styles, because compute does not resolve those paths the way it resolves API routes. Icons and the header mark in this repo are bundled as hashed /assets/… URLs so they follow the same path as the rest of the client build. Essay and certification images use /images/…; those bytes are read from the built client output on the server when the request hits compute.
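
To make the behavior split concrete, here is a minimal CDK sketch in TypeScript. It is an illustration under assumptions, not this repo's actual infra code: the bucket, VPC, balancer, and construct names are stand-ins, and a real distribution carries more settings (certificate, domain names, price class) than shown.

    import * as cdk from 'aws-cdk-lib';
    import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
    import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
    import * as ec2 from 'aws-cdk-lib/aws-ec2';
    import * as elbv2 from 'aws-cdk-lib/aws-elasticloadbalancingv2';
    import * as s3 from 'aws-cdk-lib/aws-s3';
    import { Construct } from 'constructs';

    class EdgeStack extends cdk.Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        // Stand-ins for the real bucket and balancer.
        const assetsBucket = new s3.Bucket(this, 'Assets');
        const vpc = new ec2.Vpc(this, 'Vpc');
        const alb = new elbv2.ApplicationLoadBalancer(this, 'Alb', {
          vpc,
          internetFacing: true,
        });

        new cloudfront.Distribution(this, 'SiteDist', {
          // Catch-all: SSR HTML, POSTs, and RSS go to compute.
          defaultBehavior: {
            origin: new origins.LoadBalancerV2Origin(alb, {
              // Viewer TLS ends at the edge; the CDN-to-ALB hop is plain HTTP.
              protocolPolicy: cloudfront.OriginProtocolPolicy.HTTP_ONLY,
            }),
            allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL, // POSTs pass
            cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
          },
          additionalBehaviors: {
            // More specific than the catch-all: hashed bundles from storage.
            // (Newer CDK releases prefer S3BucketOrigin.withOriginAccessControl.)
            '/assets/*': {
              origin: new origins.S3Origin(assetsBucket),
              cachePolicy: cloudfront.CachePolicy.CACHING_OPTIMIZED,
            },
          },
        });
      }
    }

CloudFront evaluates additional behaviors before the default one, so /assets/* always wins over the catch-all; that ordering is the rule of thumb above in code form.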

Why a load balancer (and not “containers only”)

Tasks come and go. A load balancer gives the CDN a stable DNS name, connection pooling, and health checks before traffic is considered good. It is the textbook shape for “HTTP service behind a CDN” — familiar on résumés and in cert material — without pretending a personal blog needs multi-region active-active.
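
For the health-check half of that claim, a short continuation of the hypothetical stack above; the port and path are placeholders rather than what this site actually runs:

    // Inside the same hypothetical EdgeStack, after the resources above.
    // The balancer only routes to targets that have passed this check.
    const appTargets = new elbv2.ApplicationTargetGroup(this, 'AppTargets', {
      vpc,
      port: 3000,                             // stand-in app port
      protocol: elbv2.ApplicationProtocol.HTTP,
      targetType: elbv2.TargetType.IP,        // container tasks register by IP
      healthCheck: {
        path: '/healthz',                     // hypothetical endpoint
        healthyThresholdCount: 2,
        interval: cdk.Duration.seconds(15),
      },
    });
    alb.addListener('Http', { port: 80, defaultTargetGroups: [appTargets] });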

I already know this layer from day jobs; I am still choosing it here because best practice for this pattern is “balancer in front of the service,” not because this traffic requires exotic L7 features. The goal is a setup that is correct, documented, and recoverable, more than one that is novel.

Identity and data plane

  • Task IAM role on the workload for the key-value store and the email API; no long-lived access keys baked into the image when I can avoid it.
  • Dispatch signups use a single-table design with a string partition key on email (see server code, and the write sketch after this list).
  • Transactional email for a small admin notification path; sandbox rules apply until production access (a minimal send is sketched below).
  • Secrets / env for non-IAM config: Parameter Store or Secrets Manager in prod, not the repo.
  • Networking: the compute tier should accept application traffic only from the balancer’s security group on the app port, while the balancer accepts web ports from the Internet. Mixing those two groups up produces timeouts that look like application bugs (see the security-group sketch below).
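
The signup write from the second bullet, sketched with the AWS SDK for JavaScript v3. Table and attribute names are placeholders; the real schema is in the server code:

    import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
    import { DynamoDBDocumentClient, PutCommand } from '@aws-sdk/lib-dynamodb';

    // Credentials come from the task IAM role via the default provider
    // chain; nothing long-lived is baked into the image or env.
    const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

    export async function recordSignup(email: string): Promise<void> {
      await ddb.send(new PutCommand({
        TableName: 'dispatch-signups',            // placeholder name
        Item: {
          email,                                  // string partition key
          createdAt: new Date().toISOString(),
        },
        // Refuse to clobber an existing signup; a repeat submit throws
        // ConditionalCheckFailedException instead of silently rewriting.
        ConditionExpression: 'attribute_not_exists(email)',
      }));
    }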
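
The admin notification from the third bullet can be one SESv2 call. The addresses are placeholders, and under sandbox rules both sender and recipient must be verified identities:

    import { SESv2Client, SendEmailCommand } from '@aws-sdk/client-sesv2';

    const ses = new SESv2Client({});

    export async function notifyAdmin(subject: string, body: string) {
      await ses.send(new SendEmailCommand({
        FromEmailAddress: 'noreply@example.com',            // placeholder
        Destination: { ToAddresses: ['admin@example.com'] }, // placeholder
        Content: {
          Simple: {
            Subject: { Data: subject },
            Body: { Text: { Data: body } },
          },
        },
      }));
    }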
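
And the networking bullet as code, continuing the CDK sketch; the app port is a stand-in, and in real code these groups are attached to the balancer and the container service:

    // Still inside the hypothetical EdgeStack from the sketches above.
    const albSg = new ec2.SecurityGroup(this, 'AlbSg', { vpc });
    // CloudFront reaches the balancer over plain HTTP on 80 (see above).
    // This could be tightened to the CloudFront origin-facing managed
    // prefix list instead of the whole Internet.
    albSg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(80), 'web in');

    const serviceSg = new ec2.SecurityGroup(this, 'ServiceSg', { vpc });
    // The app port opens only to the balancer's group, never to 0.0.0.0/0.
    serviceSg.addIngressRule(albSg, ec2.Port.tcp(3000), 'from the ALB only');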

Build once, ship twice

The HTML your server renders references exact filenames under /assets/. The object store must contain the same build output you put in the container image. Two different npm run build:aws runs → two different hashes → subtle production breakage. CI should run one build, push the image, then sync dist/client/assets/; alternatively, a single script on the build host does both in sequence (sketched below).
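
Here is that single-script shape as a hypothetical Node driver; the image tag, registry, and bucket name are placeholders:

    // Hypothetical deploy driver: exactly one build feeds both the image
    // and the bucket, so the HTML's hashed asset names match what the
    // object store actually serves.
    import { execSync } from 'node:child_process';

    const run = (cmd: string) => execSync(cmd, { stdio: 'inherit' });

    run('npm run build:aws');                             // one build, once
    run('docker build -t site:latest .');                 // image carries dist/
    run('docker push registry.example.com/site:latest');  // placeholder registry
    run(
      'aws s3 sync dist/client/assets/ s3://assets-bucket-placeholder/assets/ ' +
      '--cache-control "public,max-age=31536000,immutable"',
    );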

Cost, honestly

At surujnarine.net traffic levels, the bill should stay small: object storage + CDN for static bytes is noise, the Fargate tasks are sized tiny, and the balancer has a modest hourly baseline. The expensive part is my time mis-tuning networking, not the cloud meter. I still use the pricing calculator before locking in the price class and task size.

Build in public

This page exists partly so the architecture is reviewable — by future me, by peers, and by anyone interviewing who wants to see how I trade constraints (cert study, operational hygiene, narrative clarity) against “ship a single global worker and call it done.” Both can be valid; this is the path I am taking for this domain right now. Intentionally no account IDs, internal hostnames, or resource ARNs here — those belong in private runbooks and version control for operators, not in crawlable prose.

Related

  • Colophon — stack, fonts, and how the repo is wired locally.
  • Operator runbook (build matrix, CloudFront behaviors, compute checklist, Docker, env): docs/deploy-aws-architecture.md. When you change topology or discovery text, follow docs/seo-and-ai.md (“When production hosting or architecture changes”).