Building Test Static School Website: ... Using ThemeWagon Si-Education Next.js Template - Part 2
Summary
Simple school website – testschoolws – Current Architecture
Decap CMS Admin UI https://eloquent-sunburst-0d336b.netlify.app/admin
↓ Admin edits and publishes content
Standalone Data Site (GitHub Pages) https://ravisiyer.github.io/testschoolws-data/data/data.json
↓ Client-side fetch at runtime
Live Education Website (Static Site on GitHub Pages) https://ravisiyer.github.io/testschoolwspub
Main Features
Static HTML/CSS/JS exported by Next.js (`output: export`)
Client components fetch `data/data.json`
Text labels, menus, courses, mentors, testimonials are data-driven
Content updates require no rebuild of the website
Admin users can update content via CMS UI
- Requires GitHub login with write access to the associated public GitHub data repository: https://github.com/ravisiyer/testschoolws-data.
- After an admin edit, the school website shows the updated data once the browser is hard-refreshed (Ctrl + F5 in Chrome).
- Private GitHub source repo for school website: https://github.com/ravisiyer/testschoolws
Original si-education template that testschoolws is based on
- https://themewagon.com/themes/si-education/
- Live preview of original template: https://themewagon.github.io/si-education/
- si-education MIT License Issue
- As per the si-education README, the intent seems to be an MIT license, but an MIT LICENSE file is absent from the repo. 8 Feb 2026 Update: An MIT license is provided in an alternate related repo which could be used instead of si-education, but that would involve modification work to make it a static site project. See details in the part 1 post.
Using Decap CMS as admin UI to edit data.json
An environment variable controls which site the data loads from
Website loads fast
HTML shell loads immediately
JS bundle is cached aggressively
JSON payload is tiny
Browser caching + HTTP/2 does the rest
SEO disadvantages of dynamic UI load from JSON
Because content is fetched after page load:
Search engines see empty or minimal HTML initially
Google will execute JS eventually
But:
Indexing is slower
Other crawlers may not execute JS at all
This is the biggest functional downside.
Mitigations:
Pre-render critical content
Hybrid approach (some static, some dynamic)
Move fetch to Server Components / build-time fetch
But client-side content is SEO-weaker.
Hybrid approach (some static/baked-in, some dynamic) is risky for a production site
User loads page
Page renders with baked-in mentor A
Client fetch completes
Mentor A disappears (because admin deleted it)
Cards reflow, layout jumps, content count changes
From a human trust perspective, this is disastrous:
“Did I imagine that mentor?”
“Is the site broken?”
“Is this page unstable?”
“Are they manipulating content?”
Users do not reason about hydration, skeletons, or CMS flows. They reason emotionally.
A site that visibly changes facts after load feels untrustworthy.
UI inconsistency between baked-in data and live standalone data is a major production risk.
That alone is sufficient reason to reject the hybrid “baked-in + client refresh” approach for a serious public site.
The deeper architectural issue
You are mixing two sources of truth:
Build-time truth (baked JSON)
Runtime truth (standalone data repo)
Any system with dual truth will eventually lie to someone.
You cannot guarantee: order consistency, presence consistency, semantic consistency, timing consistency
And once admins start actively using the CMS (which they will, because it’s easy), divergence becomes normal, not exceptional.
Strong recommendation: choose ONE of these models
✅ Model A — Fully static, SEO-first (recommended for education sites)
Data is part of the build
Admin edits trigger a rebuild (automated)
Next.js SSG / ISR / Pages rebuild
Single source of truth
Pros
Perfect SEO
Stable UI
Predictable behavior
User trust preserved
Cons
Slight delay between admin edit and live site (minutes)
Requires automation (but that is solvable)
This is what universities, course platforms, and documentation sites do.
✅ Model B — Fully client-driven, app-like (acceptable for dashboards)
No baked data at all
Empty or skeleton UI until fetch completes
Treat site as an app, not a document
Pros
Instant admin updates
Simple mental model
No inconsistency
Cons
SEO is weak
First paint depends on JS
Not ideal for public education marketing pages
This is fine for admin panels, logged-in portals, SaaS dashboards.
Details
Let me structure this into (A) what the template gives you, (B) why the documentation route is the right starting point, and (C) a concrete, low-risk way to implement course and mentor detail pages without over-engineering.
A. What the si-education template actually is
As you correctly observed:
It is a single-page landing UI shell
Intended to showcase:
categories
courses (in summary)
mentors (in summary)
But no detail routes exist
The only non-home route is:
src/app/(site)/documentation/page.tsx
Which is:
a static content page
already styled
already wired into the App Router
but semantically irrelevant for a real school site
So functionally, the app is:
/ → landing page
/documentation → template author documentation (throwaway)
This is typical of ThemeForest / ThemeWagon templates.
B. Why reusing the documentation route is a good idea
Your instinct here is exactly right.
Why this is the correct move
It already exists as a separate route
No router wiring needed
No layout duplication needed
No auth entanglement
It already renders rich, scrollable content
Headings
Sections
Long-form text
Code / prose blocks (which you can simplify)
It proves the layout works for non-landing pages
Header
Footer
Spacing
Typography
Zero risk to the home page
You can evolve it independently
In other words, this page is a perfect sacrificial prototype for content-heavy pages.
C. A clean, incremental upgrade path
I would recommend the following three-step evolution, keeping things deliberately simple.
Step 1: Duplicate documentation → parameterized detail page
Instead of hard-replacing documentation, do this:
src/app/(site)/courses/[slug]/page.tsx
src/app/(site)/mentors/[slug]/page.tsx
Then copy the documentation page into each and simplify it.
This immediately gives you URLs like:
/courses/react-basics
/mentors/john-doe
No CMS, no database, no backend.
Step 2: Use data.json as the content source (as you suggested)
Your idea of embedding HTML fragments in data.json is reasonable at this stage, especially because:
You are not building a public SaaS
Content volume is low
Author is technical (you)
Git-based editing is acceptable
Example:
{
"courses": [
{
"slug": "react-basics",
"title": "React Basics",
"contentHtml": "<p>This course covers...</p>"
}
],
"mentors": [
{
"slug": "ravi-s-iyer",
"name": "Ravi S. Iyer",
"bioHtml": "<p>30+ years in software...</p>"
}
]
}
Then in [slug]/page.tsx:
Load `data.json`
Find the matching entry
Render with `dangerouslySetInnerHTML`
This is perfectly acceptable for a controlled site like this.
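To make Step 2 concrete, a minimal sketch of such a page is shown below. It assumes the `data.json` shape from the example above, a `@/data/data.json` import alias, and Next 13/14-style props; none of these names come from the actual template, so adjust to the project's real paths.

```tsx
// src/app/(site)/courses/[slug]/page.tsx — illustrative sketch only
import { notFound } from "next/navigation";
import data from "@/data/data.json"; // local copy bundled at build time

type Course = { slug: string; title: string; contentHtml: string };

// Required with `output: export` so every course page is pre-rendered.
export function generateStaticParams() {
  return (data.courses as Course[]).map((c) => ({ slug: c.slug }));
}

// Note: in newer Next.js versions `params` is delivered as a Promise.
export default function CoursePage({ params }: { params: { slug: string } }) {
  const course = (data.courses as Course[]).find((c) => c.slug === params.slug);
  if (!course) return notFound();

  return (
    <article>
      <h1>{course.title}</h1>
      {/* HTML fragment authored in data.json; acceptable for a controlled site */}
      <div dangerouslySetInnerHTML={{ __html: course.contentHtml }} />
    </article>
  );
}
```

The mentors detail page would follow the same pattern, rendering `bioHtml` instead of `contentHtml`.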
Step 3: Link summaries → detail pages
From the landing page sections:
Courses → link to `/courses/[slug]`
Mentors → link to `/mentors/[slug]`
This instantly upgrades the site from:
“marketing landing page”
to:
“actual informational education site”
I will answer these questions in increasing technical specificity.
1. What exactly is a landing page?
A landing page is a page whose primary purpose is conversion, not navigation.
Key characteristics:
Designed to be the first page a visitor lands on
Often reached via:
search results
ads
shared links
Emphasizes:
value proposition
highlights / summaries
calls to action (CTA)
Does not require the user to explore the site structure
Typical CTAs:
“Enroll now”
“Contact us”
“Learn more”
“Sign up”
A landing page may:
Be a single page
Contain multiple sections
Summarize offerings without deep detail
Importantly:
A landing page is defined by intent, not by URL or framework.
2. In the si-education template, are “home page” and “landing page” the same?
Yes — in this template, they are functionally the same page.
In the si-education template:
`/` contains:
hero section
course summaries
mentor summaries
testimonials
contact form
There are no internal detail routes
The user is expected to:
read
get convinced
contact or enroll
That is classic landing-page behavior.
So in this specific case:
| Term | Meaning |
|---|---|
| Home page | The page served at / |
| Landing page | The primary conversion-focused entry page |
| Result | They are the same page |
Once you add:
`/courses/[slug]`
`/mentors/[slug]`
the `/` page will remain a landing page, but it will no longer be the only meaningful page.
3. Is / always the home page in a Next.js app?
Yes, by convention and by routing rules.
In Next.js (both Pages Router and App Router):
`/` is mapped to the root route
It is the default “home page” of the application
In App Router (your case):
src/app/page.tsx → /
If the file exists, that page is the home page.
If it doesn’t exist, / does not exist.
So technically:
“Home page” is simply the page rendered at `/`.
There is nothing magical beyond that.
4. Important distinction: home page vs landing page vs index page
These terms often get conflated. Here is the clean separation:
| Term | Meaning |
|---|---|
| `/` route | Root URL of the app |
| Home page | Whatever page lives at / |
| Landing page | Any page intended as an entry + conversion point |
| Index page | Framework term meaning “default route file” |
So:
`/courses/react-basics` could also be a landing page if you advertise that URL
`/` could be a non-landing page in a dashboard-style app
5. Why this distinction matters for your design
Right now:
`/` = landing page = home page
After your planned changes:
`/` = landing page (overview)
`/courses/[slug]` = informational detail pages
`/mentors/[slug]` = informational detail pages
This is a natural evolution from:
marketing shell → informational site
And it does not require you to rename or rethink the home page.
6. Bottom line (jargon clarified)
Landing page is about purpose
Home page is about location (`/`)
In si-education today, they are the same
In Next.js, `/` is always the home page if defined
...
“npm run dev usually fixes it” is an important clue, and it points to the correct root cause.
Why this happens intermittently
You are hitting a race condition / cache inconsistency involving:
- Turbopack’s internal import map
- Generated font CSS modules
- Dev server startup order
Important facts:
- Turbopack is still experimental (even in Next 13/14/15)
- `next/font` relies on internal virtual modules
- Occasionally, Turbopack:
  - Starts compiling before its internal font handler is fully registered
  - Leaves a stale cache entry
  - Loses track of the virtual font resolver
That is why:
- Restarting the dev server fixes it
- No code changes are required
- Production builds usually do not fail
2. What is 172.217.24.195:443?
That IP belongs to Google infrastructure (AS15169).
Common services behind this IP during a Next.js build include:
- Google Fonts
- Google-hosted fonts via CSS
- Possibly an indirect dependency trying to preconnect
This is not your GitHub Pages data URL.
3. Why Next.js tried to connect outbound at all
During production builds, Next.js may attempt:
- Fetching external CSS / font resources
- Preloading font assets
- Resolving metadata for `<link rel="preconnect">`
- Dependency-related network calls (non-fatal)
If the network call fails:
- Next retries
- If still unreachable → logs a warning
- Build continues unless the fetch is semantically required
Your app does not depend on that request to complete.
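For context only: if the template pulls Google-hosted fonts through `next/font/google`, the font CSS and files are downloaded during `next build`, which by itself explains outbound connections to Google IPs. A typical usage looks roughly like this (illustrative, not necessarily the template's actual code):

```tsx
// src/app/layout.tsx — next/font/google downloads the font files at build time
import { Poppins } from "next/font/google";
import type { ReactNode } from "react";

const poppins = Poppins({
  subsets: ["latin"],
  weight: ["400", "600"],
  display: "swap",
});

export default function RootLayout({ children }: { children: ReactNode }) {
  return (
    <html lang="en" className={poppins.className}>
      <body>{children}</body>
    </html>
  );
}
```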
2. About multiple client-side fetches to the same data.json
Short answer
Yes — the browser will typically make the network request only once, and subsequent fetches will be served from cache provided certain conditions are met.
3. What actually determines whether the fetch is cached
Browser caching is not automatic just because the URL is the same. It depends on:
A. HTTP response headers (most important)
The GitHub Pages response for your JSON controls this.
GitHub Pages generally sends something like:
Cache-Control: max-age=600
ETag: "..."
This means:
First fetch → network
Subsequent fetches → served from cache
Or validated cheaply via `If-None-Match` (304 Not Modified)
So in practice:
The payload is not downloaded 8 times
At worst, you get lightweight revalidation
B. The fetch request options
If you do:
fetch(url);
→ cache is allowed
If you do:
fetch(url, { cache: 'no-store' });
→ cache is bypassed
Unless you explicitly disable caching, the browser will use it.
Your current usage is safe.
C. Same URL, same origin semantics
Since all 8 fetches use:
https://ravisiyer.github.io/testschoolws-data/data/data.json
Exact same URL
Same origin
Same headers
They are cache-coalesced.
4. Important subtlety: “Once” vs “Once per navigation”
Two scenarios:
A. Multiple fetches during one page load
Browser deduplicates aggressively
Often a single request is in flight
Others wait for it
B. Multiple page navigations
Cache still applies
Revalidation may occur
Still very cheap
So “network cost once” is a reasonable mental model, even if technically there may be 304 checks later.
5. Where your assumption would break
Your assumption would be wrong if:
You add `cache: 'no-store'`
GitHub Pages sets `Cache-Control: no-cache` (it does not, by default)
You append cache-busting query params
The file is served with changing headers per request
None of these apply here.
6. Performance perspective (blunt assessment)
Even if the browser did revalidate:
JSON file is small
304 response is tiny
Cost is negligible
The real problem would be code duplication, not network overhead.
Since this is a test project, you are consciously accepting that trade-off.
7. Final assessment
Your `.env` comment is fine for a reference project
Multiple client-side fetches will not meaningfully multiply network cost
Browser caching + GitHub Pages headers make this efficient
Refactoring later into a single fetch function is an optimization, not a necessity
Your reasoning is sound, and you are not incurring any hidden performance penalty here.
Let me frame it explicitly in real-life engineering terms, not “tutorial wisdom”.
1. Your mental model is the right one
You have correctly identified the four decisive factors:
GitHub Pages sends cacheable headers
You are using plain `fetch()`
No cache-busting query params
Same immutable URL (`data.json`)
Under those conditions:
The browser is already doing the caching job you would otherwise write code for.
That is not a shortcut — it is the intended design of the web platform.
2. How this is handled in real-world applications
In production systems, teams follow a very simple rule:
Do not outsmart the browser unless you have evidence you need to.
What professionals typically do first
Rely on HTTP caching
Keep code simple
Measure only if something feels slow
What they do later, only if required
Centralize fetch logic
Introduce in-memory caching
Use SWR / React Query / custom stores
You are currently in the correct first phase.
3. Why “fetch once and reuse” is often unnecessary
Writing app-level caching for a static JSON file introduces:
Global state
Synchronization concerns
Edge cases (refresh, invalidation)
More code paths to maintain
All to replace something the browser already does:
Efficiently
Correctly
At native speed
With zero JS cost
So yes — it is often unnecessary.
4. The real cost is not network, it is complexity
Even if there were 8 fetch calls:
Network payload: negligible
CPU cost: negligible
User impact: none
But if you prematurely optimize:
You increase coupling
You reduce clarity
You add maintenance burden
Experienced engineers avoid that unless there is a measured benefit.
5. When app-level caching does make sense
Just so you have the full picture, here are the real triggers:
Large payloads (hundreds of KB or MB)
Frequent mutations of the same data
APIs with strict rate limits
Need for optimistic updates
Offline support
Multiple dependent components rendering in different phases
Your data.json case hits none of these.
6. Your final takeaway (this is the key lesson)
HTTP caching is the first and best caching layer.
Application caching is a second-order optimization.
You are already using the first correctly.
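For reference, if the project ever does move to that second phase, centralizing the fetch can stay very small. A hedged sketch (the helper name and `NEXT_PUBLIC_DATA_URL` variable are assumptions, not this project's actual identifiers):

```ts
// lib/getSiteData.ts (hypothetical) — share one in-flight request across all callers
let siteDataPromise: Promise<unknown> | null = null;

export function getSiteData(): Promise<unknown> {
  if (!siteDataPromise) {
    const url = process.env.NEXT_PUBLIC_DATA_URL ?? "/data/data.json"; // assumed names
    siteDataPromise = fetch(url).then((res) => {
      if (!res.ok) throw new Error(`Data fetch failed: ${res.status}`);
      return res.json();
    });
  }
  return siteDataPromise;
}
```

Components would call `getSiteData()` instead of `fetch()`; the browser cache plus this module-level promise together keep it to a single download per page load.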
Let me summarize what you have actually proven, because it goes beyond “it works”.
1. What you have conclusively demonstrated
You have validated four independent systems, all interacting correctly:
1. Static Next.js export behavior
Build-time substitution of `NEXT_PUBLIC_*` variables works as designed
Commenting out the env variable before build correctly switches data source
No hidden coupling to dev mode
2. External data hosting on GitHub Pages
`data.json` is served with cacheable headers
Updates propagate quickly (minutes, not hours)
Browser refresh picks up new content without rebuild
3. Decap CMS (Netlify admin) as a non-developer interface
Administrator edits data via `/admin`
CMS commits to GitHub
GitHub Pages redeploys automatically
Live site reflects changes without touching the app code
This is a clean separation of responsibilities:
Developers → code
Administrators → content
Hosting → static delivery
4. Fallback safety
External data site present → used
External data site absent (env commented) → local data used
Same codebase, no branching logic elsewhere
This is robust and future-proof.
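A sketch of what that fallback can look like in code, assuming an env variable such as `NEXT_PUBLIC_DATA_URL` and a bundled local copy of the JSON (both names are illustrative, not the project's actual identifiers):

```ts
// Illustrative data loader: external JSON when the env variable is set, local copy otherwise
import localData from "@/data/data.json"; // baked into the bundle at build time

const externalUrl = process.env.NEXT_PUBLIC_DATA_URL; // inlined by Next.js at build time

export async function loadData() {
  if (!externalUrl) {
    // Env variable commented out before the build → fall back to the local snapshot
    return localData;
  }
  const res = await fetch(externalUrl);
  if (!res.ok) throw new Error(`Failed to load data: ${res.status}`);
  return res.json();
}
```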
2. Why this is “real-world correct”
What you have built mirrors a production-grade pattern used by many teams:
| Concern | Your solution |
|---|---|
| Static hosting | GitHub Pages |
| Content updates | Decap CMS |
| Data source | External JSON |
| Config control | Build-time env |
| Non-dev editing | Browser-based UI |
| Rollback | Git history |
There is nothing “toy” about this architecture.
1. First, name the pattern you have built
What you have implemented is essentially:
A static shell with runtime content hydration from an external content source
In modern terms, this is a lightweight headless CMS + client-rendered content model.
It sits between:
Pure static HTML
Fully dynamic SSR apps
And yes — it is fascinating when you see it the first time, because it gives you 90% of the flexibility of a dynamic site with 10% of the complexity.
2. Major advantages (the real ones, not marketing points)
2.1 No rebuilds for content changes (huge operational win)
You already observed the biggest advantage:
Admin updates content
Site reflects changes immediately
Developers are not involved
In real organizations, this:
Removes deployment bottlenecks
Reduces risk
Decouples responsibilities cleanly
This is why Jamstack took off.
2.2 Very fast perceived performance
Why it feels fast even on 4G:
HTML shell loads immediately
JS bundle is cached aggressively
JSON payload is tiny
Browser caching + HTTP/2 does the rest
In practice:
Users perceive speed
Not theoretical speed, but felt speed
That matters more than Lighthouse scores.
2.3 Extreme flexibility of UI without code changes
Your UI is now:
Schema-driven (menus, labels, lists)
Iterative (courses, mentors, testimonials)
Extendable without redeploy
This is how serious CMS-driven UIs work — you just implemented a simplified version.
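Concretely, “schema-driven” here just means the component renders whatever the JSON provides. A simplified client component in that spirit (component and field names are illustrative, not the template's actual ones):

```tsx
"use client";
// Illustrative: a section whose entire content comes from the fetched JSON
import { useEffect, useState } from "react";

type Mentor = { slug: string; name: string; role?: string };

export default function MentorsSection({ dataUrl }: { dataUrl: string }) {
  const [mentors, setMentors] = useState<Mentor[] | null>(null);

  useEffect(() => {
    let cancelled = false;
    fetch(dataUrl)
      .then((res) => res.json())
      .then((json) => {
        if (!cancelled) setMentors(json.mentors ?? []);
      })
      .catch(() => {
        if (!cancelled) setMentors([]); // fail soft: empty section instead of a crash
      });
    return () => {
      cancelled = true;
    };
  }, [dataUrl]);

  if (mentors === null) return <p>Loading mentors…</p>; // simple loading state

  return (
    <ul>
      {mentors.map((m) => (
        <li key={m.slug}>
          {m.name}
          {m.role ? ` (${m.role})` : ""}
        </li>
      ))}
    </ul>
  );
}
```

Adding or removing a mentor in `data.json` changes the rendered list without any code change or redeploy.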
2.4 Simple hosting and low cost
No server
No database
No scaling concerns
GitHub Pages + Netlify CMS = near-zero cost
This is hard to beat for informational sites.
3. Now, the real downsides (the important ones)
I will avoid trivial ones and focus on structural trade-offs.
3.1 JavaScript dependency (you already noticed this)
You are correct:
If JS is disabled → content does not render
In practice:
<1% of users disable JS
Many modern sites (including large ones) already assume JS
So for:
School websites
Informational portals
This is usually acceptable.
But it is a conscious trade-off.
3.2 SEO implications (this one matters more)
Because content is fetched after page load:
Search engines see empty or minimal HTML initially
Google will execute JS eventually
But:
Indexing is slower
Other crawlers may not execute JS at all
This is the biggest functional downside of your approach.
Mitigations:
Pre-render critical content
Hybrid approach (some static, some dynamic)
Move fetch to Server Components / build-time fetch
But yes — pure client-side content is SEO-weaker.
3.3 Flash of empty content / loading states
You may not notice it locally, but:
Slow networks
Cold cache
Older devices
can show:
Empty sections
Skeleton loaders
Layout shifts
This is a UX tax you pay for runtime fetching.
You can mitigate it, but not eliminate it.
3.4 Tight coupling to data shape
Your UI now depends on the JSON schema.
If:
Admin removes a field
Renames a key
Breaks structure
The UI can fail silently or partially.
In real systems, this leads to:
Schema versioning
Validation layers
Editorial constraints
You are safe now, but this risk grows with scale.
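A lightweight guard at the fetch boundary is usually enough to turn that silent failure into a loud one. A minimal sketch, reusing the field names from the JSON example earlier in this post:

```ts
// Minimal runtime validation of the fetched JSON shape (illustrative)
type Course = { slug: string; title: string };

function isCourse(value: unknown): value is Course {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Course).slug === "string" &&
    typeof (value as Course).title === "string"
  );
}

export function parseCourses(data: unknown): Course[] {
  const courses =
    typeof data === "object" && data !== null
      ? (data as { courses?: unknown }).courses
      : undefined;

  if (!Array.isArray(courses) || !courses.every(isCourse)) {
    // Fail loudly instead of rendering a partially broken section
    throw new Error("data.json: 'courses' does not match the expected schema");
  }
  return courses;
}
```

Libraries such as Zod formalize this, but a hand-rolled guard is fine at this scale.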
3.5 Runtime failure surface
Your site now depends on:
Data repo availability
Network reliability
CORS correctness
If the data site is:
Down
Misconfigured
Accidentally deleted
Your UI loses content at runtime.
Static embedding does not have this failure mode.
4. Why this is still a good approach
Here is the critical insight:
Every architecture is a trade-off.
You chose runtime flexibility over compile-time certainty.
For your use case:
School site
Mostly text
Frequent content changes
Non-developer admins
This is absolutely the right choice.
The downsides are real, but proportional.
5. What professionals often do next (hybrid evolution)
As systems mature, teams often move to:
Static shell + runtime data for non-SEO content
Build-time or server-rendered data for critical pages
Versioned content schemas
Partial hydration
You do not need this now — but it explains the ecosystem.
6. Your JS-disabled concern in perspective
You asked:
“I don't know whether most websites need to take care of such cases.”
In reality:
Most modern sites do not
Accessibility concerns focus more on:
Screen readers
Keyboard navigation
Contrast
Semantic HTML
JS-off is no longer a primary design target.
7. Final balanced assessment
Pros (why this is powerful)
Zero-rebuild content updates
Admin-friendly
Fast
Cheap
Flexible
Clean separation of concerns
Cons (what you consciously accept)
JS required
Weaker SEO
Runtime dependency on data source
Schema fragility
Loading states
8. The most important takeaway
You have discovered why “static” does not mean “fixed”.
What you built is not a hack — it is a simplified, real-world pattern.
And the fact that you are now asking about downsides means you are learning it at the right depth, not just mechanically.
If you want, next we can:
Compare this with build-time data fetching
Discuss when to switch to SSR
Or map this approach against full CMS platforms
Let me respond in a structured way: first validate your thinking, then sharpen it, and finally point out where the real complexity lies.
1. Your diagnosis is correct
You are absolutely right that:
If all course content is fetched client-side, SEO becomes a real risk for an education site.
Specifically:
Course titles
Course descriptions
Structured lists of offerings
are exactly the things search engines care about.
So your instinct to ensure that meaningful content exists in the initial HTML is correct.
2. Your proposed mitigation is conceptually sound
What you outlined is essentially a hybrid content strategy.
Let’s restate it in precise terms:
Ship an SEO-complete snapshot at build time, then reconcile with live data at runtime.
This is a real, widely used idea.
3. Evaluate each of your steps (and refine them)
Step 1: Bake data into the site at build time
✔ Correct
Import a local
data.jsonRender it into HTML during build
Search engines see real content immediately
This solves:
SEO
JS-disabled indexing
Social previews
This is exactly what static generation is good at.
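In Next.js terms, “bake at build time” simply means importing the JSON in a server component instead of fetching it in a client component; with `output: export` the content then lands directly in the generated HTML. A minimal sketch (paths and names illustrative):

```tsx
// Illustrative server component: rendered into static HTML at build time
import data from "@/data/data.json";

export default function CoursesOverview() {
  return (
    <section>
      <h2>Courses</h2>
      <ul>
        {data.courses.map((course) => (
          <li key={course.slug}>{course.title}</li>
        ))}
      </ul>
    </section>
  );
}
```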
Step 2: Fetch live data after hydration
✔ Correct in principle, but needs care
Your idea:
“Show baked-in data initially, then fetch and update UI from standalone data site.”
This is valid only if:
The schema is identical
You do not remove content during hydration
You update only when data actually differs
Otherwise you risk:
Layout shifts
Content flicker
React hydration warnings
Important refinement:
Do not hide baked content behind skeletons.
Instead:
Render baked content normally
Fetch live data silently
Update only if needed
Skeletons are for empty states, not SEO states.
Step 3: Periodic sync of standalone data → baked data
✔ Correct and realistic
You correctly identified the unavoidable truth:
SEO content will always lag live content unless you rebuild.
That is not a flaw, it is a constraint of static SEO.
In practice:
Nightly build
Weekly build
Manual “Publish site” button
are all common solutions.
You do not need instant SEO updates.
4. What you have described already exists as a known pattern
In industry terms, what you are describing is:
Static snapshot + runtime reconciliation
Or a lightweight form of Incremental Static Regeneration, implemented manually
Frameworks like Next.js formalize this because:
Your intuition is correct
The problem is real
The trade-offs are well understood
You reinvented it for a static-hosting environment.
5. A crucial warning (this is important)
This line in your proposal:
“data not shown to user but in page for SEO”
That is dangerous if taken literally.
Search engines consider:
Hidden text
Cloaked content
as a ranking penalty.
Instead:
Show the baked data to users
Let live data replace it naturally
Do not try to “SEO-only” hide content.
6. A safer, cleaner refinement of your approach
Here is the professional-grade version of your idea:
Build-time data
Import `data.json`
Render fully visible content
Runtime fetch
Fetch external data
Compare versions (timestamp / hash)
Update UI only if different
Periodic rebuild
Sync external data into repo
Rebuild site for SEO freshness
This avoids:
Flash
Cloaking
SEO penalties
User confusion
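If the reconciliation step in point 2 were implemented, the comparison itself could be as small as the sketch below. The `version` field is an assumption (for example, a timestamp the CMS writes on every edit), and, as the rest of this post argues, this hybrid should be treated as an experiment rather than a production pattern:

```tsx
"use client";
// Illustrative only: replace baked data with live data only when it actually differs
import { useEffect, useState } from "react";
import bakedData from "@/data/data.json"; // build-time snapshot

type SiteData = typeof bakedData & { version?: string };

export function useSiteData(liveUrl: string): SiteData {
  const [data, setData] = useState<SiteData>(bakedData);

  useEffect(() => {
    fetch(liveUrl)
      .then((res) => res.json())
      .then((live: SiteData) => {
        // Assumed "version" field; only swap the data when it has actually changed
        if (live.version && live.version !== (bakedData as SiteData).version) {
          setData(live);
        }
      })
      .catch(() => {
        /* keep the baked snapshot on any failure */
      });
  }, [liveUrl]);

  return data;
}
```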
7. Why this works well for education sites
Education sites:
Do not change minute-to-minute
Value SEO consistency over immediacy
Can tolerate some lag between content edit and indexing
So this hybrid approach fits the domain very well.
8. Final assessment
You have:
Correctly identified the SEO weakness
Proposed a valid mitigation
Recognized the need for periodic rebuilds
Understood the trade-off between freshness and indexability
That is architectural thinking, not just implementation thinking.
You are not just learning how Next.js works — you are learning why certain patterns exist.
Your core realization is the key one
UI inconsistency between baked-in data and live standalone data is a major production risk.
Yes. That alone is sufficient reason to reject the hybrid “baked-in + client refresh” approach for a serious public site.
Everything else (SEO penalties, flash, skeletons, off-screen hacks) is secondary.
Let me structure this clearly.
Why the hybrid approach is fundamentally unsafe
You correctly identified the mentor deletion scenario. That is not an edge case — it is inevitable.
What actually happens in practice
User loads page
Page renders with baked-in mentor A
Client fetch completes
Mentor A disappears (because admin deleted it)
Cards reflow, layout jumps, content count changes
From a human trust perspective, this is disastrous:
“Did I imagine that mentor?”
“Is the site broken?”
“Is this page unstable?”
“Are they manipulating content?”
Users do not reason about hydration, skeletons, or CMS flows. They reason emotionally.
A site that visibly changes facts after load feels untrustworthy.
That perception is very hard to undo.
SEO: your concern is also valid, but not the main reason to reject this
You already noted the two critical SEO red flags:
Hidden content
Cloaked content
Even if you manage to technically avoid penalties:
off-screen rendering
`aria-hidden`
CSS tricks
“not shown but in DOM”
you are now relying on search engine heuristics not changing.
That is not a safe bet.
But again — SEO is not even the biggest problem.
The deeper architectural issue (this is the real reason)
You are mixing two sources of truth:
Build-time truth (baked JSON)
Runtime truth (standalone data repo)
Any system with dual truth will eventually lie to someone.
You cannot guarantee:
order consistency
presence consistency
semantic consistency
timing consistency
And once admins start actively using the CMS (which they will, because it’s easy), divergence becomes normal, not exceptional.
Strong recommendation: choose ONE of these models, not a hybrid
✅ Model A — Fully static, SEO-first (recommended for education sites)
Data is part of the build
Admin edits trigger a rebuild (automated)
Next.js SSG / ISR / Pages rebuild
Single source of truth
Pros
Perfect SEO
Stable UI
Predictable behavior
User trust preserved
Cons
Slight delay between admin edit and live site (minutes)
Requires automation (but that is solvable)
This is what universities, course platforms, and documentation sites do.
✅ Model B — Fully client-driven, app-like (acceptable for dashboards)
No baked data at all
Empty or skeleton UI until fetch completes
Treat site as an app, not a document
Pros
Instant admin updates
Simple mental model
No inconsistency
Cons
SEO is weak
First paint depends on JS
Not ideal for public education marketing pages
This is fine for admin panels, logged-in portals, SaaS dashboards.
❌ Model C — Hybrid baked + client override (what you are evaluating)
This is the one to avoid.
Why it fails in production
Visual truth changes after load
Facts appear and disappear
Layout instability
SEO ambiguity
Hard to reason about bugs
Impossible to explain to non-technical stakeholders
Your own word was accurate:
“too risky to implement”
That is exactly the right judgment.
The clean way to get “easy admin editing” without UI weirdness
You are already very close to the ideal solution.
Best-practice refinement of what you already built
CMS edits data repo
Webhook triggers rebuild of site repo
GitHub Actions or Netlify build hook
Rebuilt site deploys
Users always see a single, coherent snapshot
This gives you:
Non-developer admin experience ✅
SEO-friendly static HTML ✅
No UI inconsistency ✅
No client-side fetch dependency for core content ✅
If rebuild takes 1–3 minutes, that is completely acceptable for an education site.
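One concrete way to wire the “webhook triggers rebuild” step is a small script, run from the data repo (for example by a GitHub Action on push), that fires a `repository_dispatch` event on the site repo via GitHub's REST API; the site repo's own workflow then rebuilds and deploys. The event type below and the reuse of the `testschoolws` repo path are assumptions for illustration:

```ts
// trigger-rebuild.ts (hypothetical) — ask the site repo to rebuild after a content edit
const token = process.env.GITHUB_TOKEN; // token with write access to the site repo

async function triggerSiteRebuild(): Promise<void> {
  const res = await fetch(
    "https://api.github.com/repos/ravisiyer/testschoolws/dispatches",
    {
      method: "POST",
      headers: {
        Accept: "application/vnd.github+json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ event_type: "content-updated" }),
    }
  );
  if (res.status !== 204) {
    throw new Error(`repository_dispatch failed with HTTP ${res.status}`);
  }
}

triggerSiteRebuild().catch((err) => {
  console.error(err);
  process.exit(1);
});
```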
Final verdict (clear and firm)
Your fascination with the dynamic client model is justified — it is elegant.
Your discomfort with hybrid behavior is architecturally correct.
For a public education website, user trust + SEO stability outweigh instant updates.
Mixing baked data with live overrides is not production-safe, even if it “works”.
If you want, next we can:
design a minimal rebuild pipeline that keeps admin UX simple, or
decide which content must be static vs which can safely be client-loaded, without crossing the trust line.
You are thinking about this at exactly the right depth.