My Gita web app: Regular backend data service is not working, so I am using a simpler backend data service

Last updated on 8 Dec. 2025
This post is about my Gita web app deployed on Vercel. .... My post on Gita web app.

Jump to Resetting app to regular functionality

======================================

6 Dec. 2025 Update: For the time being, I am using an alternate, simpler and free JSON backend service https://vedicscriptures.github.io which provides Bhagavad Gita data in quite a different manner from the original graphql service. So I had to simplify the app to use translation and commentary for Gita verses of only one author — Swami Sivananda — and therefore disabled the Settings menu item, which previously allowed selection of translators and commentators.

The basic app functionality of showing Gita chapters and verses with Swami Sivananda translation and commentary is working. But I have not been able to test the app thoroughly and so there may be some issues. This limited functionality app is deployed at the regular app URL: https://gita-rsi.vercel.app/
end 6 Dec. 2025 Update

======================================
Related ChatGPT (CG) chat: Gita app sitedown, on 30 Nov. 2025

On 30 Nov. 2025, I found that the Gita app's backend data service - https://gql.bhagavadgita.io/graphql - which is provided by somebody else and so is not in my control, was not working.

So the app showed a "Sorry! Something went wrong!" message with the following technical details (to get the specific message, on 1 Dec. 2025, I re-ran the last deployment before I made changes for this issue: gita-jyk0kl1h5-ravi-s-iyers-projects.vercel.app deployed on 7 Sept. 2024):
  • Details: An error occurred in the Server Components render. The specific message is omitted in production builds to avoid leaking sensitive details. A digest property is included on this error instance which may provide additional details about the nature of the error.
I updated the app to have a suitable error message about the backend not being available. That update was not straightforward, as covered later on in this post. But first ...

Resetting app to regular functionality 

If the original graphql backend data service does start working again and, for some reason, I am not able to reset the deployed app to regular functionality, then any Next.js developer can clone the Open Source (MIT license) GitHub repo of this app - https://github.com/ravisiyer/gita - and deploy its main branch as a production site on Vercel or some other host. This deployed app should have the same functionality as earlier (before this issue).

I have made only Readme document changes in the main branch to mention this issue. After this issue was encountered, I have not changed any code in that branch. All code changes have been made only to the sitedown and datarest branches of the repo. The current limited functionality app is built from the datarest branch.

When logged into Vercel, I can run the last deployment before I made changes for this issue: gita-jyk0kl1h5-ravi-s-iyers-projects.vercel.app deployed on 7 Sept. 2024. But for others, this URL may not work. I don't know whether, for the free tier, Vercel has a feature of publishing an old deployment as an alternate production URL.

===============================
Now back to the issues faced while updating the app.

Initially, I simply updated the Readme of the project to reflect the above issue, committed the change and pushed it to GitHub. That triggered an automatic deployment on Vercel, which failed. The error was at the stage of generating static pages (which needs the backend data service).

As a starting step, I wanted to stop the automatic deployment/build from being triggered. https://vercel.com/docs/project-configuration/git-settings#ignored-build-step
indicates that if I choose Don't build anything (instead of the default Automatic) for Ignored Build Step, then a repo update will not trigger a build and attempted deployment.

I changed the setting to Don't build anything and saved it.
The setting was saved but with a warning: "Configuration Settings in the current Production deployment differ from your current Project Settings."
...

Me: In Next.js, is there a way to disable static page generation for a build
...

Me: Can I use:
export const dynamic = "force-dynamic";
in app/layout.tsx to disable it for all pages.

You see, my backend data service is not working. So while I have put up a suitable message in the home page and removed all navigation links, the build process fails due to static page generation. So I want to disable it for the whole app.
...

[I created a new branch - sitedown - for this change.]

That worked on the local dev project.
Next I did a local build - npm run build - followed by npm run start to test it.

Me: Tailwind classes are not coming into play on running the built app; also the image size is too big

[CG gave some suggestions but that did not work out.]

I then heavily edited app\page.tsx - images, the chapter summary and tiles components (which invoke the backend) etc. were commented out. At some stage, I tried a build with such backend-calling code in the page commented out but without disabling static generation in app\layout.tsx. That build failed as it still tried to generate static pages for routes having code that calls the backend. So it looks like the build may go simply by the directory and file structure rather than guess which files will come into play based on the code.

I also had to comment out the Navbar component in app\layout.tsx.

On the local build, Tailwind classes were not being recognized. But later, on the Vercel build, the Tailwind classes were recognized. I don't know why the local build did not recognize them. Note that on the local dev server, the Tailwind classes were recognized.

So I modified the message content style and tags so that the local build shows the message in a decent format.

Next I committed the changes to the sitedown branch and pushed them to the remote GitHub repo with the -u flag to set the upstream.

Now I wanted this sitedown branch to be used as the production branch on Vercel so that the https://gita-rsi.vercel.app/ link would show the latest sitedown branch content (if deployment is successful).
On Vercel, Project Settings -> Environments -> Production enabled me to do that.


I had to trigger a redeploy of the latest sitedown deployment as it had been triggered automatically but was cancelled due to the above setting being Don't build anything. That redeploy was successful and the production link https://gita-rsi.vercel.app/ showed the output as per the latest commit to the sitedown branch.

....

After an interaction with a friend, I visited the Vercel usage page and found that this Gita web app had 100K "Edge requests" over the last year, which ChatGPT said, for my Next.js app with default SSG, would be about 10K unique page views at a conservative estimate. I am very surprised by this. I had thought that the web app would be visited rarely. Of course, 10K views in a year is not a big number, but I thought it would be more like 100 views in a year!
Country/region wise, the USA accounts for almost the whole (99%) of these 100K Edge requests, with India a very distant second (0.5%). It also listed a few other countries (Sweden, Singapore, Germany, Brazil etc.) with tiny percentages of visits.
...

About the backend data service down issue being possibly a transient one - I hope that is the case.
But I have my doubts. The same group, https://github.com/gita , that published the backend graphql repo, https://github.com/gita/bhagavad-gita-graphql , and its deployed URL, also has a frontend listed in the backend readme, https://bhagavadgita.io/. That frontend is still working.

But the group has quite a few frontend and backend Bhagavad Gita repos. It also has a REST API repo which seems to be working but may have limited data.

The graphql backend was awesome in the data it had. It had an array of commentators and translators in English and Hindi. So my app was able to provide all that data to users. I think a typical Gita REST API has much more limited data.

To conclude, I think having such a free data service (array of commentators & translators) was too good to last. 

[Later update: The same group has published Gita JSON data, which is similar to if not the same as that provided by the graphql backend service, at https://github.com/gita/gita/tree/main/data. It has an Unlicense license ("This is free and unencumbered software released into the public domain.") So all this Gita data is in the public domain! Awesome!!!

A less organized but still quite complete, publicly licensed JSON dataset: https://github.com/bhavykhatri/DharmicData/tree/main/SrimadBhagvadGita. end-Later update]

But I hope my doubts are proved to be wrong and that it is only a transient problem.
----------

To know exactly what caused the earlier failure in my app, I re-ran the last deployment before I made changes for this issue: gita-jyk0kl1h5-ravi-s-iyers-projects.vercel.app deployed on 7 Sept. 2024. Then I was able to view the log for that deployed site, which had more details on the error. The key part of its messages is given below:
2025-12-01 17:44:16.442 [error] Error: Failed to fetch all chapters from data source (GraphQL endpoint): https://gql.bhagavadgita.io/graphql at l (/var/task/.next/server/app/page.js:15:117) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async l (/var/task/.next/server/app/page.js:75:329) { digest: '2358542878' }
----

Me (to CG): I will give you the sequence in the code that I figured out just now. Note that I am looking at Next.js code after over a year. So I am not able to easily distinguish between server and client components. I think client components need 'use client' but I am not sure.

Anyway app\page does not have 'use client' at the top and it has ChapterTiles component.

ChapterTiles component does not have 'use client' at the top and it invokes getAllChapters from data.ts 

data.ts does not have 'use client' at the top and has the code that produces that error message. Here's that code:
export async function getAllChapters() {
  const client = createApolloClient();
  try {
    const { data } = await client.query({
      query: gql`
        query {
          allGitaChapters {
            nodes {
              id
              chapterNumber
              chapterSummary
              chapterSummaryHindi
              name
              nameTranslated
              versesCount
            }
          }
        }
      `,
    });

    return {
      allGitaChapters: data.allGitaChapters.nodes,
    };
  } catch (error) {
    console.error("GraphQL Endpoint Error:", error);
    throw new Error(
      `Failed to fetch all chapters from data source (GraphQL endpoint): ${GRAPHQL_URI}`
    );
  }
}
----------

To know which routes of the app are generated as static pages, I looked at the build log for that 1 year old deployment. It showed that only _not_found and icon.ico routes were generated as static pages!
...
Me: If one makes the entire app static then will it retrieve data for all possibilities like all chapters, all verses, all commentators and all translators?

CG: Short answer: Yes — but only if YOU explicitly tell Next.js to generate all those pages at build time, which is usually impractical or even impossible for large datasets.
...

Static with respect to what routes?

There are two types of routes:

1️⃣ Static routes

Examples:

  • /

  • /about

  • /chaptersummaries

These can be statically rendered easily.

2️⃣ Dynamic routes

Examples:

  • /chapter/[chapternumber]

  • /verse/[id]

For these, Next.js needs to know every possible value of chapternumber and id at build time.


❗ If you want all chapters/verses/commentators to be static…

You MUST:

  • Provide a list of ALL chapter numbers

  • Provide a list of ALL verse IDs

  • Provide a list of ALL commentators

  • Provide a list of ALL translators

And Next.js will generate:

  • /chapter/1

  • /chapter/2

  • /verse/1

  • /verse/2

  • /commentary/1

BUT:

Next.js can only do this if you provide:

export async function generateStaticParams() { ... }

for each dynamic route.

And inside those functions, you must fetch ALL required data.
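
[For reference, a minimal sketch of what such a generateStaticParams() could look like for the /chapter/[chapternumber] route, using the existing getAllChapters() from data.ts; the file path and import path are assumptions for illustration, not code from my repo:

// app/(mainapp)/chapter/[chapternumber]/page.tsx (path assumed for illustration)
import { getAllChapters } from "@/app/lib/data"; // assumed import path for data.ts

export async function generateStaticParams() {
  // Fetch the full chapter list at build time and return one params object per chapter
  const { allGitaChapters } = await getAllChapters();
  return allGitaChapters.map((ch: { chapterNumber: number }) => ({
    chapternumber: ch.chapterNumber.toString(),
  }));
}

If getAllChapters() throws at build time (e.g. because the backend is down), this static generation fails, which is the point CG makes next.]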


❌ Why this is usually not feasible

If your GraphQL backend is down:

  • Next.js cannot generate ANY static pages that require data.

  • Static generation will fail.

Even if backend is up:

  • The dataset may be too big.

  • You may not want to regenerate thousands of pages on every build.

  • GraphQL rate limits may slow down build.

...
Me: Got it. Now another question: Ideally the app should detect whether the backend is up or down, and show the functional app or a site-down message as appropriate. But now the old code from the commit of a year back will fail the build as, during the static page generation stage, the build process needs to talk to the GraphQL backend.
...

Me: If only icon and not_found are static, why does the build fail if I try with the current code in the main branch (which does not have export const dynamic = "force-dynamic"; in app\layout.tsx)?

CG: Great question — and the answer is subtle but very important.

Even though your old deployment prerendered only 2 static routes, the build process STILL evaluates every route during the build.
And those route evaluations can fail if the backend is down.

Let me explain clearly.


Key Insight:

Next.js App Router always evaluates routes during build — even if they end up being dynamic.

Meaning:

  • A route does not have to be static to cause build-time backend calls.

  • The build process runs the server component tree once for each route to decide if it can be static.

  • If that code fetches data unconditionally, then the build fails.

This is exactly what is happening now.


📌 Your current code path (in main branch) always fetches GraphQL in Server Components

Your homepage route:

app/page.tsx → ChapterTiles → getAllChapters()

All these are server components (no "use client"), and getAllChapters() contains this:

const { data } = await client.query(...)

This executes synchronously during build route evaluation.

So during build, Next.js tries:

  1. Load the / route

  2. Execute your server components

  3. They call the GraphQL backend

  4. Backend is down

  5. Build fails


❗ But wait — didn’t / previously generate as dynamic (ƒ)?

Yes — but this is after Next.js evaluates the route.

The process is:

Build starts
↓
Next.js loads each route
↓
Next.js executes the server component code ONCE to determine:
  - can it be static?
  - does it require dynamic rendering?
↓
If it throws an error → build fails

So even if the final decision is “this route is dynamic,”
Next.js still runs your server component once at build time.

And that run is what fails.

This is why:

✔ Old code built successfully last year (backend was UP)

❌ Same code fails today (backend is DOWN)


Summary

Your question:

If only icon and not_found are static, why does the build fail with the current code?

Answer:

Because Next.js evaluates all routes at build time to see if they can be static.
During this evaluation, it executes your server components.
Your server components fetch data from the backend immediately.
Backend is down → evaluation fails → build fails.

Even if the route is later marked as dynamic (ƒ),
the evaluation step still runs and still fails.


✔ How export const dynamic = "force-dynamic" fixes it

When placed in app/layout.tsx, it tells Next.js:

“Do NOT evaluate any routes for static render.
Treat EVERYTHING as dynamic immediately.”

So:

  • Next.js skips evaluating server components at build time

  • It does NOT fetch backend during build

  • Build succeeds even if backend is down

  • But all routes run at runtime, not build time


...

Me: Hmm. So I can use force-dynamic even in my main branch as it is only icon and not-found that are generated as static, and they are small. So retrieving them dynamically is no problem.

CG: Yes — your conclusion is correct and also matches how most real-world Next.js apps work.
...

Me: In my case, the home page itself uses ChapterTiles component unconditionally. So the build would try to execute that which will result in the fetch call which fails.
...

Me: ... So now I can do the following:
1) Use force-dynamic at app\layout.tsx (so entire app)
2) In home page and ChapterTiles component catch the error (and rethrow the error in ChapterTiles)
3) In home page if error is caught, show a suitable message that backend data service may be down and disable functionality of app.
4) In home page, if error is not caught, regular functionality of app will be in play.

what's ur view

CG: Your plan is 100% correct and is exactly how you should structure a resilient Next.js app when your backend may be down.

Here’s a precise evaluation of each step:


1. force-dynamic on app/layout.tsx

export const dynamic = "force-dynamic";

✔ Forces all routes to be dynamically rendered
✔ Prevents ANY build-time backend calls
✔ Avoids build failures when your GraphQL API is offline
✔ Still works fast on Vercel because serverless is inexpensive

This is a good choice for your app because all meaningful content comes from the backend anyway.


2. Catch errors inside ChapterTiles (and rethrow)

Best design:

  • ChapterTiles should be a pure data loader component

  • If GraphQL fetch fails → throw a meaningful error up

Example (good pattern):

export default async function ChapterTiles() {
  try {
    const data = await getAllChapters();
    return <Tiles chapters={data.allGitaChapters} />;
  } catch (err) {
    console.error("Error fetching chapters:", err);
    throw err; // Important: bubble up to parent
  }
}

✔ Clean
✔ Keeps all “deciding what to show” logic in the page, not in low-level components


3. Home page catches errors & shows graceful “backend down” UI

This is exactly right.

Example pattern:

export default async function HomePage() {
  try {
    return (
      <main>
        <ChapterTiles />
      </main>
    );
  } catch (err) {
    return (
      <main className="p-4 text-center">
        <h1 className="text-xl font-bold">Service Unavailable</h1>
        <p className="mt-2">
          Our backend data service seems to be temporarily unavailable.
        </p>
        <p className="text-sm mt-1 text-gray-600">
          Please try again in a few minutes.
        </p>
      </main>
    );
  }
}

✔ Avoids mysterious white screens / generic Next.js error
✔ Gives the user a friendly explanation
✔ The app still deploys correctly even if backend is offline


4. If no error → show normal UI

Exactly correct.

The user sees the functional app only if GraphQL responds normally.


✔️ Summary: Your design is perfect

Your approach gives you:

Requirement / Achieved? / Reason:
  • Deployment works even if backend is down: Yes (force-dynamic prevents build eval)
  • App shows graceful message instead of crash: Yes (catch errors in page component)
  • Only try backend calls at runtime: Yes (no static rendering attempted)
  • No accidental SSG of pages depending on backend: Yes (entire app dynamic)


...

[My app has an error.tsx which showed the error message on the app page.]
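
[For reference, a minimal sketch of the shape of such an error.tsx error boundary in the Next.js App Router; this is not my actual file, and the classNames and wording here are only illustrative:

// app/error.tsx - App Router error boundaries must be client components
"use client";

export default function Error({
  error,
  reset,
}: {
  error: Error & { digest?: string };
  reset: () => void;
}) {
  return (
    <main className="p-4 text-center">
      <h1 className="text-xl font-bold">Sorry! Something went wrong!</h1>
      <p className="mt-2">
        The backend data service may be temporarily unavailable. Please try again later.
      </p>
      <button className="mt-2 underline" onClick={() => reset()}>
        Try again
      </button>
    </main>
  );
}

Such a boundary catches errors thrown while rendering the route's server components and shows its fallback UI in their place.]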
---------------

As of now, since the backend data service is down, if I make the above changes, I will not be able to test the case of the backend data service being up. So I think it is better to postpone making the changes suggested above till such time that the backend data service is up (and I have the time to make the changes and test).

=======================
2 Dec. 2025

Switched to new CG chat: GraphQL Gita API issue

Me: https://github.com/gita/bhagavad-gita-graphql is a GraphQL Gita API repo whose deployment is currently not working.

Neither does the repo have any Bhagavad Gita data.

Now my Gita app was using this GraphQL API which is now not available. So my app is not functional now. Are u with me so far?
...

Me: There are much more limited Bhagavad Gita datasets (perhaps one commentator, one translator, two languages) which I believe are open source. These datasets are available as JSON and are also accessible through a REST API.

How can I take this JSON data and convert it to a GraphQL endpoint easily? Further, are there free GraphQL server hosts available?

If I can do the above, then I can get my frontend Next.js Gita app to work with this limited dataset GraphQL server.

CG: Good questions. Yes — you absolutely can take a JSON-based Bhagavad Gita dataset and turn it into a GraphQL endpoint, and there are free (or very low-cost) hosting options for that. Below is a roadmap / set of methods (with trade-offs) to get your Gita app working again, plus some recommended free GraphQL hosting solutions.

✅ How to convert JSON data into a GraphQL endpoint

If you have static JSON files (say translations, commentaries, verses, etc.), you can quickly spin up a GraphQL server on top of them. Here are a few of the easiest ways:

Option A: Use json-graphql-server (zero or minimal code)

  • The json-graphql-server npm package lets you turn a plain JSON object into a working GraphQL API with almost no coding. (npm)

  • Basic workflow:

    1. Create a JSON object (or more commonly, a JS module exporting an object) where keys are “entity types” and values are arrays of data.
      For example:

    // db.js
    module.exports = {
      verses: [
        { id: 1, chapter: 1, verse: 1, text: "...", translation: "..." },
        { id: 2, chapter: 1, verse: 2, text: "...", translation: "..." },
        // ...
      ],
      commentaries: [
        { id: 1, verseId: 1, author: "Some Guru", comment: "..." },
        // ...
      ],
      // … other collections
    };
    
    2. Run the server with one command:

    npx json-graphql-server db.js
    

    This starts a GraphQL server (e.g. at http://localhost:3000/graphql) with auto-generated types, queries (like allVerses, Verse, etc.), even filtering, pagination, mutations if you like. (npm)
    3. Use that as your backend; point your Next.js (or React) frontend to the GraphQL endpoint.

  • Pros: Very fast to get started; no database required; minimal code.

  • Cons: For large datasets, storing everything in memory may not scale; no real database features (transactions, indexing, relational constraints). Also, no built-in authentication or advanced features.

Because your Gita dataset is relatively static (just verses and commentaries), this might be a good fit — especially for a small/medium-sized dataset.


Option B: Use a more full-fledged GraphQL + database setup

If you want scalability, querying power, maybe future additions (e.g., user annotations, bookmarking), you could load the JSON into a real database (e.g., PostgreSQL or even SQLite) and run a Node.js GraphQL server (e.g., Apollo Server, or GraphQL Yoga) over it.

Steps:

  1. Load JSON into a database — e.g., write a one-time script that reads the JSON and inserts into DB tables (verses, commentaries, etc.).

  2. Define a GraphQL schema (types for Verse, Commentary; queries for verse(chapter, verseNumber), versesByChapter(chapter), etc.).

  3. Use a GraphQL server library (Apollo Server, GraphQL Yoga, etc.) to serve the schema.

This gives you more control (indexes, efficient queries, relations, possibly mutations), but requires more initial work.
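
[For reference, a minimal sketch of the server part of Option B using Apollo Server 4, reading the data from a JSON file into memory rather than a database to keep the example short; the file path, type and field names are illustrative assumptions:

// server.ts - minimal Apollo Server over Gita verse JSON data (illustrative sketch)
import { ApolloServer } from "@apollo/server";
import { startStandaloneServer } from "@apollo/server/standalone";
import { readFileSync } from "node:fs";

type Verse = { id: number; chapter: number; verse: number; text: string; translation: string };
const verses: Verse[] = JSON.parse(readFileSync("./data/verses.json", "utf8"));

const typeDefs = `#graphql
  type Verse {
    id: ID!
    chapter: Int!
    verse: Int!
    text: String
    translation: String
  }
  type Query {
    verse(chapter: Int!, verseNumber: Int!): Verse
    versesByChapter(chapter: Int!): [Verse!]!
  }
`;

const resolvers = {
  Query: {
    verse: (_: unknown, args: { chapter: number; verseNumber: number }) =>
      verses.find((v) => v.chapter === args.chapter && v.verse === args.verseNumber),
    versesByChapter: (_: unknown, args: { chapter: number }) =>
      verses.filter((v) => v.chapter === args.chapter),
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
const { url } = await startStandaloneServer(server, { listen: { port: 4000 } });
console.log(`GraphQL server ready at ${url}`);
]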


Option C: Use a schema-generator tool + then host with any GraphQL server

If you want a starting point, there are tools that can generate a GraphQL schema from JSON automatically. For example, json-to-graphql (though it's old) can take JSON and output schema definitions. (npm)

You can then import that schema into an Apollo Server (or other GraphQL server) and serve it. This is useful if your JSON has nested or more complex structure, and you want type definitions generated automatically rather than hand-writing them.


🚀 Where to host this (for free or very low cost)

Since your Gita app is likely modest in usage, many free-tier or hobby-tier platforms are perfect. Here are some good choices:

Hosting Solution (with notes / pros):
  • Render: Free Web Service tier supports Node.js apps. Good for running an Express + GraphQL server. (MakeUseOf)
  • Cyclic: Serverless-like hosting; free tier for small APIs. Good for JSON-based GraphQL. (MakeUseOf)
  • Railway: Simple deployment; supports databases too — good if you load JSON into PostgreSQL. (MakeUseOf)
  • Deta Space: Fully free for small projects and simple web services. (MakeUseOf)
  • Fly.io: Free “allowance” for small services; simple VM containers — good for Node.js + GraphQL. (MakeUseOf)
  • Hasura + Postgres (self-hosted or via free tier DB): If you load the JSON into Postgres, you can use Hasura GraphQL Engine to instantly expose a full GraphQL API over your data — no extra backend code required. (Medium)
  • Hasura route: Many people use Hasura to wrap relational data (Postgres) into GraphQL automatically. It’s very powerful if you want to extend beyond simple JSON → GraphQL. (OpenApps)

  • Container-based hosts (Render, Fly.io, Railway, etc.): Great if you want to control the GraphQL server yourself (e.g., custom resolvers, maybe authentication, custom logic).


💡 What I recommend for your Gita app (given your context)

Since:

  • Your data is mostly static (verses, translations, commentaries),

  • You just need a simple GraphQL API to make your Next.js frontend work again,

  • You likely prefer minimal effort so you can get back to the UI / features (dark mode, responsive layout, etc.),

I would recommend Option A: use json-graphql-server + deploy on a free platform like Render or Cyclic.

Why: It’s the quickest — likely within one evening, you can have a working GraphQL endpoint returning Gita data, and point your Next.js app to it. No need to manage a database, no need to write boilerplate resolvers.

If later you want to expand (e.g., add user bookmarks, notes, search with indexing, multi-language support, etc.), you can always migrate to a more robust setup (e.g., Postgres + Hasura or Apollo).


⚠️ Some caveats & things to watch out for

  • With json-graphql-server, all data lives in memory — for large data or many requests, performance / memory usage may become an issue.

  • If you store large text (commentaries, verses), make sure your server limits (RAM) on free tier are adequate.

  • No built-in authentication or rate limiting with the simple JSON server — if your API becomes public and popular, you may need to add security.

  • Some free hosts “sleep” your server when idle, causing a “cold start” delay for first requests after inactivity. (Reddit)


If you like, I can sketch a full folder + file layout (a minimal package.json, db.js, server file, instructions) for you — you can just clone, paste your Gita JSON, deploy, and immediately get a working GraphQL endpoint.
Do you want me to build that skeleton for you now?

Me: An alternative is to use a Gita REST API instead of the GraphQL API. It seems that free Bhagavad Gita REST API servers are available.

Here is the key data.ts file in my frontend project. I think we could have an equivalent for a REST API. It seems that the GraphQL APIs used are straightforward and do not use features that would not be available in a REST API. Specifically, it seems that it gets all data for a chapter's verse - all languages, all commentators, all translators. Then the display page code seems to pick the language, commentator and translator. I will share the verse page code later. Here's the data.ts code:
...

CG: Yes — looking at this data.ts, I can confirm your app is using GraphQL in a very “REST-like” way, and you can convert it to REST API calls without losing functionality.
...

CG: Thank you for sharing the full Verse Page code.
This confirms 100% that your frontend is using GraphQL in a REST-like way and can be moved to a REST API with extremely little change.
...

Me: Very interesting! Now while I could use a public REST API, tomorrow those servers could also go down and then I would have the same issue I have now with the GraphQL backend.

The JSON data for Bhagavad Gita (many translators and commentators, IIRC) seems to be available publicly. Here's one: https://github.com/bhavykhatri/DharmicData/tree/main/SrimadBhagvadGita

Now if it is easy to host this JSON dataset on GitHub and get my own REST API for it, then I will be independent of somebody else's backend data service suddenly stopping to work.

CG: Excellent — this is the best long-term solution:

✅ Host your own JSON dataset (on GitHub)

✅ Serve your own REST API

→ Your Gita app becomes fully independent of any external API.

And the good news:

It is VERY easy to serve a REST API directly from a GitHub-hosted JSON dataset

You can get a fully working REST backend in 5–10 minutes.

Let me outline the three best options, then I’ll recommend the simplest one for your case.


🥇 BEST METHOD (Easiest + Free + Stable):

Use GitHub + jsDelivr CDN + a lightweight REST API wrapper

You can:

  1. Put the JSON files in a GitHub repo (yours).

  2. Load those JSON files directly via CDN URLs like:

    https://cdn.jsdelivr.net/gh/<username>/<repo>/<path-to-json>
    
  3. Build a tiny REST API (Node.js + Express) that:

    • fetches JSON from CDN at runtime or

    • loads JSON locally baked into the server bundle

  4. Deploy the REST API on a free host:

    • Render

    • Cyclic

    • Railway

    • Deta Space

    • Vercel (if using Edge Functions)

This gives you a 100% independent backend.


🟦 Let’s evaluate your dataset

You shared:

👉 https://github.com/bhavykhatri/DharmicData/tree/main/SrimadBhagvadGita

That dataset includes:

  • Sanskrit text

  • transliteration

  • word-by-word meaning

  • translations

  • commentaries

Perfect for your use-case.


✔️ What REST endpoints we will create

Matching your current GraphQL needs exactly:

GET /chapters

Returns:

  • id

  • chapterNumber

  • versesCount

  • summaries

GET /chapter/:chapterNumber

Returns:

  • chapter info

  • list of verses

  • each verse with selected translation (configurable)

GET /verse/:verseId

Returns:

  • text

  • transliteration

  • wordMeanings

  • all translations

  • all commentaries

Exactly the structure your frontend expects.


⭐ Most important:

You do not need a database at all.

Your entire Bhagavad Gita dataset is static.
So a simple JSON → REST API is perfect.


🧱 Folder Structure I will create for you (example)

gita-api/
  data/
    chapters.json
    verses.json
    translations.json
    commentaries.json
  server.js
  package.json

server.js will expose:

/chapters
/chapter/:chapterNumber
/verse/:id

And read the JSON files from your repo (baked into the deployment).


🔥 The server will be extremely small

Example (this is only a preview):

import express from "express";
import chapters from "./data/chapters.json";
import verses from "./data/verses.json";
import translations from "./data/translations.json";
import commentaries from "./data/commentaries.json";

const app = express();
app.get("/chapters", (req, res) => res.json({ chapters }));

app.get("/chapter/:num", (req, res) => {
  const num = Number(req.params.num);
  const chVerses = verses.filter(v => v.chapterNumber === num);
  res.json({ chapter: { ...chapters[num-1], verses: chVerses }});
});

app.get("/verse/:id", (req, res) => {
  const id = Number(req.params.id);
  const verse = verses.find(v => v.id === id);

  const tr = translations.filter(t => t.verseId === id);
  const cm = commentaries.filter(c => c.verseId === id);

  res.json({
    verse: {
      ...verse,
      translations: tr,
      commentaries: cm
    }
  });
});

app.listen(3000);

This is all you need.


🟩 Hosting Options (Free + Recommended)

Render Free Web Service

  • Very stable

  • No credit card

  • Perfect for small Express APIs

Cyclic

  • Automatically redeploys on GitHub push

  • Perfect for small JSON APIs

Railway

  • Very simple deployment

  • Free tier works for your size

Deta Space

  • Forever free

  • Serverless functions

  • Great for small APIs


🎉 Result

You will get a permanent, self-hosted REST API such as:

https://gita-api-iyer.onrender.com/chapters
https://gita-api-iyer.onrender.com/chapter/2
https://gita-api-iyer.onrender.com/verse/23

Your Next.js app becomes completely independent.


✔️ Next Step — Choose one

A) I create the full REST API project for you

(Express, ready to deploy, using the dataset you linked)
→ You just click “Deploy to Render”

B) You want me to inspect the dataset first and design the correct JSON structure

→ I examine the dataset files, decide the best structure, then write the API.

C) You want both REST + GraphQL versions (same JSON)

→ I generate both servers.


👉 Which do you prefer?


Me: Here is one REST API hosted on GitHub.io - https://vedicscriptures.github.io/slok/1/1/

The Node Express free servers have small usage limits. GitHub.io may be more generous for the free tier.

CG: Yes — this is actually a very good direction, and you can absolutely do it without depending on any external backend.

Let me break down the important points clearly:


1. GitHub Pages (github.io) can act as a REST API for static JSON files

GitHub Pages only hosts static files, but that is exactly what you need.

If you upload JSON files to a GitHub repo and enable GitHub Pages, then each JSON file becomes a public URL like:

https://<username>.github.io/<repo>/data/chapter/1.json

This is exactly what the vedicscriptures.github.io API is doing.

So yes — you can do the same:

✔ No server required

✔ No rate limits

✔ No uptime problems

✔ 100% free (forever)

✔ Extremely reliable (GitHub Pages rarely goes down)


2. How to turn your JSON dataset into a REST API (GitHub Pages)

Step A — Clone dataset

Take the JSON from:
https://github.com/bhavykhatri/DharmicData/tree/main/SrimadBhagvadGita

Copy the JSON folder into your own repo, e.g.:

/data/  
   /1.json  
   /2.json  
   ...
   /18.json

Or per verse:

/v/1/1.json  
/v/1/2.json  
...

Step B — Enable GitHub Pages

In your repo:

  1. Settings → Pages

  2. Select Deploy from branch

  3. Branch: main

  4. Folder: / (root)

  5. Save

Your API base URL becomes:

https://<your-username>.github.io/<repo>/

Step C — Access JSON as REST API

Example endpoint:

https://yourname.github.io/gita-api/data/1.json

Or per-verse:

https://yourname.github.io/gita-api/v/1/1.json

You now have your own permanent REST API.


3. Using this REST API in Next.js frontend

In your Next.js code:

const chapter = await fetch(
  "https://yourname.github.io/gita-api/data/1.json"
).then(r => r.json());

4. Optional: Convert REST JSON → GraphQL

If you still want GraphQL, you can add a thin layer using:

GraphQL Mesh (recommended)

It can turn any REST API into GraphQL with zero code.

Code example

Create mesh.config.ts:

sources:
  - name: Gita
    handler:
      jsonSchema:
        baseUrl: "https://yourname.github.io/gita-api/data/"

Then run:

npx graphql-mesh serve

This creates a GraphQL endpoint that resolves REST → GraphQL.

Free hosting options for GraphQL Mesh

  • Vercel (best for Next.js + Mesh)

  • Render free tier

  • Railway free tier

These are enough because Mesh is lightweight.


Recommendation (Best & Simple Approach)

If your goal is:

  • 100% independence

  • No reliance on other servers

  • Reliable long-term hosting

  • Zero cost

Then:

✔ Host the JSON on GitHub → use it as REST API

✔ Skip GraphQL (REST performance is actually faster)

✔ If you need GraphQL, deploy Mesh on Vercel


If you want, I can prepare:

✅ A sample repo structure
✅ A ready-to-use Gita REST API on GitHub
✅ Sample Mesh config for GraphQL
✅ Example Next.js client code

Just tell me which option(s) you want.

Me: I think I can use https://vedicscriptures.github.io/ itself as of now.
I can record how to create such a github.io static JSON server site using open source Bhagavad Gita data. That becomes a fallback option if https://vedicscriptures.github.io/ goes down.
What do u say?

CG: That is exactly the right plan — practical, lightweight, long-term, and fully under your control.

...
Me: Let me share data.ts code again with you. After you go through it, can you provide me REST equivalent code that invokes https://vedicscriptures.github.io/
...

[There were many iterations.]

Me: https://vedicscriptures.github.io/slok/verseCount.json gives a 404 error in the browser.
Let us do it step by step.
First let us do the getAllChapters() implementation.
https://vedicscriptures.github.io/chapters/ does provide top-level info about chapters

...
Me: chaptertiles.tsx invokes this function. Here's its code:
...
[TS error]
...

Me: The error continues. Now the key file for types is gqltypes-d.ts. I have given its contents below:
...

CG: Thanks — with this full GitaChapter type definition, the issue is now 100% clear.
...

Me: Succeeded but with one issue. To explain the issue, I need to share the app\page.tsx code:
...
Please study it. Next I will share app\layout.tsx:
...
navbar.tsx code:
...
The error I get (running on dev server): 
Unhandled Runtime Error 
Error: Hydration failed because the initial UI does not match what was rendered on the server. 
See more info here: https://nextjs.org/docs/messages/react-hydration-error 
Expected server HTML to contain a matching <section> in <header>.
 <RootLayout>
  <html>
   <body>
    <Navbar>
      <header>
      ^^^^^^^^ 
       <section>
       ^^^^^^^^^ 
Call Stack 
React

[CG gave some suggestions but that did not work out.]


After I disabled the DarkReader Chrome extension, the error went away.

Me: Now the home page is working and showing 18 chapter tiles as expected.
The chaptersummaries link also works.
Next we need to get each chapter page to work. That needs a REST implementation for getChapter().
Can u provide that? Do you want me to re-upload anything to you?

[So with some iterations with CG, the getAllChapters() function equivalent of data.ts got done in a new app\lib\datarest.ts file (in the datarest git branch). The datarest.ts getAllChapters() function used the vedicscriptures.github.io REST (JSON) API to get the data and transformed it to the format (the graphql server format) that data.ts getAllChapters() was returning. So the calling code in my Next.js app got data from the datarest.ts getAllChapters() function in the format it expected.

I modified the calling code in chaptertiles.tsx and app\(mainapp)\chaptersummaries\page.tsx to import getAllChapters() from datarest.ts instead of data.ts. IIRC, the calling code did not require any other changes. Thus, at this stage, most of the changes were localized to datarest.ts. That the home page and chaptersummaries page worked with minimal changes to app UI code and only the addition of datarest.ts code led me to continue with this approach of using CG's assistance to get such datarest.ts conversion code. I had done this initial step as an experiment to see if it worked out. CG's assistance was critical as the graphql format and its TypeScript data types were quite cumbersome. If I had to do this conversion manually, I think it would have taken me too long and so I would not have tried it.]
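
[For reference, a simplified sketch of the kind of conversion the datarest.ts getAllChapters() does; this is not the exact code in the datarest branch, and the source field names from the /chapters/ endpoint (chapter_number, verses_count, name, transliteration, summary.en/hi) are assumptions that need to be checked against the actual JSON:

// app/lib/datarest.ts (sketch) - REST-based getAllChapters() returning the GraphQL-era shape
const REST_BASE = "https://vedicscriptures.github.io";

type RestChapter = {
  chapter_number: number;
  verses_count: number;
  name: string;
  transliteration: string;
  summary?: { en?: string; hi?: string };
};

export async function getAllChapters() {
  const res = await fetch(`${REST_BASE}/chapters/`, {
    next: { revalidate: 3600 }, // let Next.js cache the chapters JSON for an hour
  });
  if (!res.ok) {
    throw new Error(
      `Failed to fetch all chapters from data source (REST endpoint): ${REST_BASE}/chapters/`
    );
  }
  const chapters: RestChapter[] = await res.json();

  // Transform to the shape the GraphQL-based data.ts getAllChapters() used to return,
  // so that chaptertiles.tsx and the chaptersummaries page need no other changes.
  return {
    allGitaChapters: chapters.map((ch) => ({
      id: ch.chapter_number.toString(),
      chapterNumber: ch.chapter_number,
      chapterSummary: ch.summary?.en ?? "",
      chapterSummaryHindi: ch.summary?.hi ?? "",
      name: ch.name,
      nameTranslated: ch.transliteration,
      versesCount: ch.verses_count,
    })),
  };
}
]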

...
Me: https://vedicscriptures.github.io/slok/chapter/1/ gives 404
https://vedicscriptures.github.io/chapter/1/ gives only summary info.

...
Me: The REST endpoint for slok does give multiple translations. But it follows a somewhat different pattern.
Here's the JSON for https://vedicscriptures.github.io/slok/1/1/

{
    "_id": "BG1.1",
    "chapter": 1,
    "verse": 1,
    "slok": "धृतराष्ट्र उवाच |\nधर्मक्षेत्रे कुरुक्षेत्रे समवेता युयुत्सवः |\nमामकाः पाण्डवाश्चैव किमकुर्वत सञ्जय ||१-१||",
    "transliteration": "dhṛtarāṣṭra uvāca .\ndharmakṣetre kurukṣetre samavetā yuyutsavaḥ .\nmāmakāḥ pāṇḍavāścaiva kimakurvata sañjaya ||1-1||",
    "tej": {
        "author": "Swami Tejomayananda",
        "ht": "।।1.1।।धृतराष्ट्र ने कहा -- हे संजय ! धर्मभूमि कुरुक्षेत्र में एकत्र हुए युद्ध के इच्छुक (युयुत्सव:) मेरे और पाण्डु के पुत्रों ने क्या किया?"
    },
    "siva": {
        "author": "Swami Sivananda",
        "et": "1.1 Dhritarashtra said  What did my people and the sons of Pandu do when they had assembled\ntogether eager for battle on the holy plain of Kurukshetra, O Sanjaya.",
        "ec": "1.1 धर्मक्षेत्रे on the holy plain? कुरुक्षेत्रे in Kurukshetra? समवेताः assembled together? युयुत्सवः desirous to fight? मामकाः my people? पाण्डवाः the sons of Pandu? च and? एव also? किम् what? अकुर्वत did do? सञ्जय O Sanjaya.Commentary Dharmakshetra -- that place which protects Dharma is Dharmakshetra. Because it was in the land of the Kurus? it was called Kurukshetra.Sanjaya is one who has conered likes and dislikes and who is impartial."
    },
    "purohit": {
        "author": "Shri Purohit Swami",
        "et": "1.1 The King Dhritarashtra asked: \"O Sanjaya! What happened on the sacred battlefield of Kurukshetra, when my people gathered against the Pandavas?\""
    },
    "chinmay": {
        "author": "Swami Chinmayananda",
        "hc": "।।1.1।। सम्पूर्ण गीता में यही एक मात्र श्लोक अन्ध वृद्ध राजा धृतराष्ट्र ने कहा है। शेष सभी श्लोक संजय के कहे हुए हैं जो धृतराष्ट्र को युद्ध के पूर्व की घटनाओं का वृत्तान्त सुना रहा था।\nनिश्चय ही अन्ध वृद्ध राजा धृतराष्ट्र को अपने भतीजे पाण्डवों के साथ किये गये घोर अन्याय का पूर्ण भान था। वह दोनों सेनाओं की तुलनात्मक शक्तियों से परिचित था। उसे अपने पुत्र की विशाल सेना की सार्मथ्य पर पूर्ण विश्वास था। यह सब कुछ होते हुये भी मन ही मन उसे अपने दुष्कर्मों के अपराध बोध से हृदय पर भार अनुभव हो रहा था और युद्ध के अन्तिम परिणाम के सम्बन्ध में भी उसे संदेह था। कुरुक्षेत्र में क्या हुआ इसके विषय में वह संजय से प्रश्न पूछता है। महर्षि वेदव्यास जी ने संजय को ऐसी दिव्य दृष्टि प्रदान की थी जिसके द्वारा वह सम्पूर्ण युद्धभूमि में हो रही घटनाओं को देख और सुन सकता था।"
    },
...
+++++++++++++++++++++++++++

In the above, the property "et" means English translation, "ht" means Hindi translation (only 2 languages),
"ec" means English commentary and "hc" means Hindi commentary.

[This JSON structure is quite odd. It turned out to be too difficult for CG to generate a conversion function properly by studying the JSON data.]

CG: You are right — the VedicScriptures slok endpoint follows a very different JSON pattern from what your app’s current Gita Super App backend returns.

Your backend returns a compact structure like:

slok
transliteration
meaning
word_meanings
translations: [ {author, text}, ... ]
commentaries: [ {author, text}, ... ]

But the VedicScriptures JSON for https://vedicscriptures.github.io/slok/1/1/ is NOT structured this way.
It is completely flattened by author, and the field names differ per author.

Here is the pattern:

✔ Single root object

No translations[], no commentaries[].

✔ Each author is a top-level key

Examples:

  • tej

  • siva

  • purohit

  • chinmay

  • san

  • adi

  • gambir

  • (about 20+ keys)

✔ Inside each key, the content varies!

Author keys have different field names:

Author key: fields you get
  • tej: author, ht (Hindi translation)
  • siva: author, et (English translation), ec (English commentary)
  • purohit: author, et
  • chinmay: author, hc (Hindi commentary)
  • san: author, et

There is no consistent naming.
Some keys have translations, some commentaries, some both.

...

📌 Summary

You did NOT miss anything — their format is completely different, and requires special parsing logic.

If you want, I can generate a unified adapter that converts any slok JSON (your backend / vedicscriptures / GitHub JSON sources) into one common structure so your UI stays the same.

Just say:
“Generate parser”

Me: Generate parser
...
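
[CG then generated the parser. For reference, a simplified sketch of that kind of adapter (not CG's exact output): it walks every author-keyed entry in a slok JSON object and collects "et"/"ht" fields as translations and "ec"/"hc" fields as commentaries; the output shape here is only illustrative:

// Adapter sketch: flatten the author-keyed slok JSON into translations/commentaries arrays
type AuthorEntry = {
  author: string;
  et?: string; // English translation
  ht?: string; // Hindi translation
  ec?: string; // English commentary
  hc?: string; // Hindi commentary
};

export function parseSlok(slok: Record<string, unknown>) {
  const translations: { author: string; text: string; language: string }[] = [];
  const commentaries: { author: string; text: string; language: string }[] = [];

  for (const value of Object.values(slok)) {
    // Author entries are the nested objects that carry an "author" field
    if (typeof value !== "object" || value === null || !("author" in value)) continue;
    const entry = value as AuthorEntry;
    if (entry.et) translations.push({ author: entry.author, text: entry.et, language: "english" });
    if (entry.ht) translations.push({ author: entry.author, text: entry.ht, language: "hindi" });
    if (entry.ec) commentaries.push({ author: entry.author, text: entry.ec, language: "english" });
    if (entry.hc) commentaries.push({ author: entry.author, text: entry.hc, language: "hindi" });
  }

  return {
    slok: slok.slok as string,
    transliteration: slok.transliteration as string,
    translations,
    commentaries,
  };
}
]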

Me: Now can you generate the getChapter code in our datarest.ts to incorporate this feature, such that it replicates what data.ts did with GraphQL backend.
...

Me: You need to get each verse individually as the chapter API gives only summary info for that chapter
...

Me: ... You can assume a constant array that maps translator authorId to translator author name (which is what the JSON returns). Then you need to pick up only the "et" entry that matches the translator author name (the argument passed as the Id), and return such translations for each verse.
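
[For reference, a sketch of the kind of constant map meant here, listing only the author keys visible in the sample slok JSON above; the actual list in the app may differ:

// Map of slok JSON author keys to author names (illustrative; only sampled authors listed)
const TRANSLATOR_AUTHORS = [
  { id: "tej", name: "Swami Tejomayananda" },
  { id: "siva", name: "Swami Sivananda" },
  { id: "purohit", name: "Shri Purohit Swami" },
  { id: "chinmay", name: "Swami Chinmayananda" },
] as const;
]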

[Some iterations.]

Me: Now, in the code I had given earlier, this part:
  return {
    gitaChapter: {
      nodeId: `chapter-${chapterNumber}`,
      id: chapterNumber.toString(),
      chapterNumber: meta.chapter_number,
      name: meta.name,
      nameTranslated: meta.transliteration,
      chapterSummary: meta.chaptersummary,
      chapterSummaryHindi: meta.chaptersummaryhindi,
      versesCount,
      verses,
    },
  };
======
instead of verses
gitaVersesByChapterId is expected.

Can you fix the whole function code to match the above?

[CG was struggling to provide a proper solution. So I debugged the running code (with console.log for the React Server side code, which most of the modified code, including data.ts, was), found out what data was not being provided correctly and then hard-coded the function to return Swami Sivananda data only, but in the graphql return data format. With this, the getChapter() function got done. The function is a little slow as it retrieves JSON for each verse in the chapter separately from the backend, and so the user typically has to wait a noticeable time (a few seconds) for the chapter page to show. I need to check with CG whether this can cause an issue with GitHub (e.g. 78 verses downloaded in parallel with Promise.all used to wait for all to complete).
However, the function works in getting Swami Sivananda data and showing it to the user. I then tried getting CG to give me some code for getVerse in line with my getChapter function. Once again CG struggled, but it provided me some basic code which I could modify to return only Swami Sivananda data.]
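
[For reference, a simplified sketch of the parallel per-verse fetching in getChapter(); this is not the exact code in the datarest branch. The "siva" key and its "et"/"ec" fields come from the slok JSON shown earlier; the output field names are only illustrative, not the exact graphql-style shape the app uses:

// Sketch: fetch all verses of a chapter in parallel, keeping only Swami Sivananda data
async function fetchChapterVerses(chapterNumber: number, versesCount: number) {
  const verseNumbers = Array.from({ length: versesCount }, (_, i) => i + 1);

  // One HTTP request per verse, fired in parallel; Promise.all waits for all of them
  const sloks = await Promise.all(
    verseNumbers.map((v) =>
      fetch(`https://vedicscriptures.github.io/slok/${chapterNumber}/${v}/`, {
        next: { revalidate: 3600 }, // let Next.js cache each verse JSON for an hour
      }).then((res) => {
        if (!res.ok) throw new Error(`Failed to fetch slok ${chapterNumber}.${v}`);
        return res.json();
      })
    )
  );

  // Keep only Swami Sivananda's ("siva") English translation and commentary (hard-coded)
  return sloks.map((slok, i) => ({
    verseNumber: i + 1,
    text: slok.slok,
    transliteration: slok.transliteration,
    translation: { author: slok.siva?.author, text: slok.siva?.et },
    commentary: { author: slok.siva?.author, text: slok.siva?.ec },
  }));
}
]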
...

Me: 10:33, 03 December 2025
I simplified and modified the getVerse function. Now it is working with one hardcoded translator and commentator's data being returned for a verse. Thanks for your help.
From my web app About page update:
For the time being, I am using an alternate JSON backend service https://vedicscriptures.github.io which provides Bhagavad Gita data in quite a different manner from the original graphql service. So I had to simplify the app to use translation and commentary for Gita verses of only one author - Swami Sivananda. I also have not been able to test the app properly and so there may be some issues. But the basic functionality of showing Gita chapters and verses with Swami Sivananda translation and commentary is working.
--- end About page extract ---
...

[After getVerse was working with only Swami Sivananda data, the app was functional, though limited to Swami Sivananda data. As mentioned above, I modified the About page with a suitable message. Then I added a short note on the Home page.]

CG: ... 

Ensuring Space Around Links in HTML (Notes for Future Reference)

Problem:
Inline elements like <a> collapse surrounding whitespace.
So a normal space written in the HTML source may not appear as expected — especially if the space is next to a line break or multiple spaces. This can cause the link to appear “stuck” to the previous or next word.

Solutions depending on the setup:

1. Using Tailwind CSS

  • Solution: Add horizontal margin to the link.

  • Example:

    <a href="/about" class="mx-1">About</a>
  • Why it works: Tailwind applies real CSS margins, so spacing is guaranteed.


2. Using Plain CSS (no Tailwind)

  • Solution: Add margin-left / margin-right via CSS.

  • Example:

    <a href="/about" style="margin: 0 4px;">About</a>
  • Why it works: CSS margins create predictable spacing regardless of HTML whitespace collapsing.


3. Using Pure HTML With No CSS

  • Solution: Use &nbsp; (non-breaking space) on each side of the link.

  • Example:

    Some text&nbsp;<a href="/about">About</a>&nbsp;continues here.
  • Why it works: &nbsp; never collapses, so spacing is always preserved.


Summary Table

  • Scenario: Tailwind present. Problem solved: prevent collapsed whitespace around links. Best method: class="mx-1"
  • Scenario: Plain CSS. Problem solved: need reliable spacing without depending on HTML whitespace. Best method: margin: 0 4px;
  • Scenario: Pure HTML (no CSS at all). Problem solved: only HTML available, whitespace collapsing unavoidable. Best method: use &nbsp;

...

[I also disabled the Settings menu item.]

Me: I now have two GitHub repo branches: main and datarest.
The main branch has the regular version of the app which uses the graphql backend.
The datarest branch has the version of the app using the alternative simple backend, and so an app with limited functionality.
Visitors to the GitHub repo will see the main branch and its Readme by default.
So I was planning to update the Readme to be the same for the datarest and main branches. The Readme points to my blog post which has details of the issue, these branches etc.

But is that the right approach? It struck me that the main branch Readme should reflect the main branch app, with a reference to the datarest branch having the working alternative lesser-functionality app and its Readme.

What's ur view? What is the typical industry practice in such cases?

CG: Here’s the industry-standard way to handle this situation very clearly and cleanly, and it matches what experienced open-source maintainers do.

Recommended Approach (Best Practice)

➡️ Keep the main branch README focused on the main branch

The main branch is what visitors see first.
Its README should describe:

  • the intended, full-functionality version of your app

  • that it depends on the GQL backend (which is currently down)

  • and provide a short, prominent note pointing users to the working datarest branch

➡️ The datarest branch should have its own README describing the fallback/alternate version

This README should explain:

  • this is a limited-functionality version

  • uses the alternate REST/JSON backend

  • is kept temporarily active because the main backend is down

  • link back to the blog post if needed

...
[I followed the above suggestion (perhaps with minor variations).]

Me: The project is deployed on Vercel with the GitHub repo linked to it. Vercel does a build for each commit made to the GitHub repo on any branch. I don't want to disconnect the repo from Vercel as that is too major a step. I could not find a simple toggle on Vercel to turn automatic deployment on/off.

Instead, in Project Settings -> Git -> Ignored Build Step, there is a Behaviour dropdown, defaulted to Automatic, which can be changed to Don't Build Anything. That still attempts a deployment but stops before the build. It is not as good an option as the simple automatic deployment on/off toggle available on some other hosting services. Also, it applies to the whole project.

I don't want a Readme change to result in an attempted deployment and build. Also, a failed deployment and build is shown as a red cross mark on the GitHub repo.
Any suggestions?

[CG was not able to give good solutions though it gave details on vercel.json approach.]

Me: vercel.json adds some more syntax stuff to be known. Vercel UI is easier. So I don't want to have vercel.json.
...

Me: I could disable branch tracking for Preview deployments.
I have set production branch as datarest.
So I think now commits to main branch will not trigger a deployment.
...

Me: I could not figure out a way to disable production branch tracking. If I leave the branch blank, will that work or will it go to the default and pick main?

CG: Here’s how Vercel behaves — and the important part is: you cannot leave production branch blank.

✅ Key fact

Vercel always must have one production branch.
If you clear the value, Vercel automatically falls back to:

➡️ main (or master if the repo still uses that).

So leaving it blank will NOT work — it will switch back to main as the production branch and start building on every commit again.

...
Me: My main branch Readme push to GH repo did not trigger a deployment on Vercel. So that part works.

...
Me: The promote-preview-deployment-to-production feature in Vercel is a nice one. But it should have been combined with an option to easily prevent ***any*** production deployment on any GitHub branch update (without writing hook code or vercel.json). Then I could have always checked a preview deployment and, only if that was working, promoted it to production.
===========

Me: 15:57, 04 December 2025
The getChapter() function we wrote retrieves JSON for each verse in the chapter separately from the backend. Can this cause an issue with GitHub (e.g. 78 verses downloaded in parallel with Promise.all used to wait for all to complete)?

CG: Short answer: No, GitHub will almost certainly NOT block, rate-limit, or complain about 78 JSON file requests at once.

But let me give you a clear, complete explanation:


✅ Will GitHub Pages have a problem with 78 parallel JSON fetches?

No — GitHub Pages can handle this easily.

✔ GitHub Pages serves static files over a CDN

Your https://vedicscriptures.github.io/... files are served through Fastly CDN, which is designed to handle thousands of concurrent requests.
Serving 78 small JSON files is trivial for it.

✔ Your traffic volume is extremely small

Even if your site had hundreds of users per day, each loading 78 small JSON objects, the load is still negligible for GitHub’s CDN.

✔ GitHub Pages has very high rate limits

GitHub rate limits API access, not static file serving.

Serving static files:

  • Has no documented request-per-second limit

  • Is intended for public websites

  • Is designed for browsers fetching many resources (HTML, JS, CSS, images, JSON, etc.)

78 JSON fetches is nothing compared to a typical web app that loads:

  • 50+ JS chunks

  • 40+ images

  • fonts

  • CSS files

  • map tiles

  • etc.

Promise.all() parallel fetch is safe

Browsers naturally make multiple parallel downloads — often 6–10 per domain at a time — but CDNs and browsers manage the scheduling automatically.

Your 78 fetches will run smoothly because:

  • The browser will queue excess requests

  • GitHub CDN will respond quickly

  • No rate limits are triggered


⚠ The only practical concern

The total time for 78 sequential HTTP requests might be slightly slower than loading one big JSON file.

But:

  • Files are tiny

  • CDN is fast

  • Parallelism reduces total wait time

So even performance-wise, it's fine.


🟢 Final verdict

Your getChapter() implementation using Promise.all to fetch each verse JSON individually is completely safe.
GitHub Pages will not block, rate-limit, or complain about this level of parallel downloading.

You can proceed confidently.

Me: U said: GitHub rate limits API access
Aren't we using API access when we use:
      fetch(
        `https://vedicscriptures.github.io/slok/${chapterNumber}/${v}/`,
        { next: { revalidate: 3600 } }
      ).then((res) => { ...

in getChapter()

CG: Great question — and the answer is no, that is not GitHub API access.

You are not hitting the GitHub API at all.

Let me explain clearly:


What you are calling is NOT GitHub API — it is GitHub Pages (static file hosting)

Your URL:

https://vedicscriptures.github.io/slok/1/1/

is served by GitHub Pages, which is:

  • just a static file CDN

  • equivalent to hosting JSON on Netlify, Vercel, S3, Firebase Hosting, Cloudflare Pages, etc.

  • not rate-limited like GitHub’s REST API (https://api.github.com/...)

You are simply fetching a static JSON file from the repository’s gh-pages (or similar) branch.

This is public, cacheable, CDN-driven static content, not programmatic API usage.


❌ This is NOT GitHub API usage

GitHub API URLs look like:

https://api.github.com/repos/owner/repo/contents/path
https://api.github.com/graphql
https://api.github.com/users/...

Those have strict rate limits (60–5000 req/hr depending on auth).

Your URLs look like:

https://USERNAME.github.io/path/to/file.json

That is static file delivery — totally different system.

...

Me: That's very interesting. So what then are the limits for GitHub Pages?

CG: Here are the real limits for GitHub Pages — and importantly, you are far below all of them.


1. GitHub Pages traffic limit (soft limit): ~100 GB / month

GitHub does not enforce a strict hard limit, but the documented guideline is:

“GitHub Pages sites have a soft bandwidth limit of 100 GB/month.”

This is not enforced strictly — many people exceed it — but if a site massively exceeds it, GitHub may temporarily disable the site.

Your app?

Each JSON file is ~1–5 KB.
Even fetching 100 verses × 5 KB = 500 KB per user.

You would need:

100 GB / 0.5 MB ≈ 200,000 full chapter loads per month

You're nowhere near that.


2. Rate limits: No meaningful limits for normal traffic

GitHub Pages is served through Fastly CDN.

Fastly handles:

  • burst traffic

  • parallel requests

  • high concurrency

A single user loading 78 JSON files is trivial.

There is no request-per-hour restriction like GitHub API.


3. Request concurrency limit: High (not documented)

Browsers usually open:

  • 6 parallel HTTP connections per domain

  • but with HTTP/2 (which GitHub Pages uses), all requests are multiplexed over 1 connection

So 78 JSON files load extremely fast and easily.

No throttling on GitHub's side.


⚠️ 4. Size limits

Published GitHub Pages sites are expected to stay under roughly 1 GB in total, and individual files pushed to the repository must stay under GitHub's 100 MB per-file limit.

Your files are ~1 KB each → far below either limit.


⚠️ 5. CPU/memory

Not applicable — GitHub Pages serves only static files.


⚠️ 6. Abuse detection

GitHub may block a site if:

  • it behaves like a CDN for huge files

  • exceeds traffic by massive amounts

  • is used for file sharing

  • shows automated scraping patterns at high scale

But for a Bhagavad Gita app fetching small JSON files, this will never be an issue.


🟢 Conclusion: Your usage is safe

Your usage (multiple JSON fetches per chapter) fits perfectly within GitHub Pages' intended purpose:

  • lightweight static data

  • high redundancy through CDN

  • no API limits

  • no concurrency restrictions

  • fits within traffic guidelines

You can continue using GitHub Pages without worry.


If you want, I can also give you:

  • A strategy to reduce requests (optional)

  • How to cache data on the Next.js server

  • How to make the app work offline

Just let me know.

Me: I am asking this question in general, not just for this getChapter() function ...
Are GitHub Pages limits independent of regular GitHub repo usage/limits?
I mean, if some GitHub Pages site exceeds the limit and gets stopped, will the associated GitHub account also be disabled for GitHub repo usage?

CG: Great question — and yes, the limits are independent.

Here’s the clear and authoritative breakdown:


GitHub Pages limits are separate from GitHub repo/account limits

GitHub treats Pages as a separate service from your normal repository hosting.

If a GitHub Pages site violates bandwidth limits or is disabled:

👉 Your GitHub account will NOT be banned.

👉 Your repositories will NOT be restricted.

👉 You will still be able to push, pull, commit, use issues, etc.

GitHub simply disables or rate-limits the Pages site only, not the whole account.

This is stated in multiple GitHub staff responses in discussions.


⚠️ What happens if a GitHub Pages site exceeds limits?

GitHub may take one of these actions only for Pages:

  1. Temporarily disable the public Pages site

  2. Send you a notice (email)

  3. Ask you to reduce bandwidth or move to a paid hosting/CDN

  4. In extreme cases, require you to convert to GitHub Enterprise Pages

But:

❌ They do not suspend the GitHub account

❌ They do not delete the repository

❌ They do not block git operations

The restriction applies only to:

https://username.github.io
https://orgname.github.io
https://username.github.io/project-name

Everything else continues normally.


🟢 Why they keep it separate

GitHub Pages is a free CDN service, but GitHub knows:

  • many repos use Pages for documentation

  • disabling repos would break developer workflows

  • repos are GitHub’s core product

  • Pages is a free add-on that can be restricted independently

So they isolate the two systems.


🪬 Worst-case scenario (very rare)

If a Pages site is used like a heavy CDN with huge bandwidth (tens of TB/month), GitHub might:

  • disable the Pages site

  • send a warning

  • ask to host static assets elsewhere (Cloudflare Pages, Netlify, Vercel)

But again:

Repos stay fully functional.


🟢 Bottom line

You can confidently use GitHub Pages for your JSON backend:

✔ Excess traffic will only affect the Pages site
✔ Your GitHub repos remain safe
✔ No risk of account suspension
✔ No impact on regular commits/pushes


If you want, I can also give you:

  • A simple caching strategy to reduce Pages traffic by 90%

  • A way to store all verses in one JSON file per chapter

  • Or a backup plan using Cloudflare Pages (also free, unlimited bandwidth)

Just tell me!

Me: Next general question: Can a small Node Express server be hosted on GitHub Pages? I expect the answer is no, as it is code and not static pages. But I wanted to be sure.

CG: Short answer: No — you cannot host a Node/Express server on GitHub Pages.

Here is the clear explanation:


GitHub Pages cannot run server-side code

GitHub Pages supports only static hosting, meaning:

  • HTML

  • CSS

  • JavaScript (running in the browser only)

  • Images

  • JSON

  • Markdown

But it does not support:

  • Node.js

  • Express

  • Python / Django / Flask

  • PHP

  • Ruby

  • Databases

  • Any backend container

GitHub Pages has zero server-side execution capability.


Why it cannot run Express

A Node/Express app needs:

  • A running Node.js process

  • An ability to accept HTTP requests dynamically

  • A server environment

GitHub Pages is just a static CDN. It serves files from disk, nothing more.

So Express cannot run there.
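
To illustrate the distinction, here is a minimal Express sketch (my own example, not anything from the Gita app): it needs a long-running Node process listening on a port, which GitHub Pages simply does not provide. The closest equivalent on Pages is pre-generating one static JSON file per route, which is what vedicscriptures.github.io appears to do.

  // Sketch: the kind of long-running process an Express server needs.
  // GitHub Pages has no runtime to start such a process; it only serves files.
  import express from "express";

  const app = express();

  // A dynamic handler that computes a response per request -
  // exactly what static hosting cannot do.
  app.get("/slok/:chapter/:verse", (req, res) => {
    res.json({ chapter: req.params.chapter, verse: req.params.verse });
  });

  app.listen(3000, () => console.log("Listening on http://localhost:3000"));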

Me: What then is the GitHub API service? Is it a paid service which can be used for Node Express servers?

CG: ...

1. GitHub API (REST / GraphQL APIs)

This is what developers use to read/write repo metadata, such as:

  • list issues

  • create a PR

  • read repo files

  • manage releases

  • automate CI/CD

✔️ It is not a hosting service for running Node/Express servers.

It only lets you interact with GitHub repositories.

Pricing

  • Free tier has rate limits (e.g., 60 requests/hour unauthenticated, 5,000 requests/hour authenticated).

  • Paid GitHub plans increase some limits, but none of them allow running servers.

Where the GitHub API is used

  • GitHub Actions

  • Tools like npm, pnpm, brew, etc., when they fetch releases

  • Third-party apps (e.g., GitHub Desktop, VSCode Git integration)

...

🔍 So what is GitHub API service for?

Examples:

Example 1 — Get the contents of a file

GET https://api.github.com/repos/user/repo/contents/path/file.json

Example 2 — List commits

GET https://api.github.com/repos/user/repo/commits

Example 3 — Create an issue

POST https://api.github.com/repos/user/repo/issues

These are operational APIs for GitHub’s own data.
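
To make the contrast with GitHub Pages concrete, here is a rough sketch of what reading a JSON file through the contents API involves. The owner, repo and file path are hypothetical; the point is that the API wraps the file in a JSON envelope with base64-encoded content, and the request counts against the REST rate limit.

  // Sketch: reading a JSON file through the GitHub contents API.
  // The owner, repo and path below are hypothetical, purely for illustration.
  async function readFileViaGitHubApi(): Promise<unknown> {
    const res = await fetch(
      "https://api.github.com/repos/some-user/some-repo/contents/data/verse-1-1.json",
      { headers: { Accept: "application/vnd.github+json" } }
    );
    if (!res.ok) throw new Error(`GitHub API error: HTTP ${res.status}`);
    // The API wraps the file: { name, path, encoding: "base64", content, ... }
    const envelope = await res.json();
    const raw = Buffer.from(envelope.content, "base64").toString("utf8");
    return JSON.parse(raw);
  }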

...

Me: Got it. So even without GitHub Pages, we can use requests like:
GET https://api.github.com/repos/user/repo/contents/path/file.json
to get JSON from a GitHub repo.
However, stringent rate limits may come into play, which is why deploying the data on GitHub Pages may be a better option. Am I on the right track?

CG: Yes — you are exactly on the right track.

Let me lay out the reasoning clearly so that this becomes rock-solid.


✅ 1. Accessing raw JSON via GitHub API (not Pages)

You can fetch JSON directly from a GitHub repository using:

GET https://api.github.com/repos/user/repo/contents/path/file.json

But this has drawbacks:

❗ Drawback #1 — Rate limits are very strict

  • Unauthenticated: 60 requests per hour

  • Authenticated: 5,000 per hour (requires creating a token, not ideal for client-side apps)

Your Verse-by-Verse Gita app could easily hit these limits if:

  • many users open chapters frequently

  • 78 verses per chapter × many chapters × many users

This would lead to users getting HTTP 403 rate-limit errors.

So you cannot depend on the GitHub API for public app users.
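
For completeness, here is a small sketch (my own illustration; the helper name is made up) of how such a failure surfaces in code: api.github.com responses carry x-ratelimit-* headers, and once the quota is exhausted the API responds with HTTP 403 (or 429) until the reset time.

  // Sketch (helper name is made up): surfacing a rate-limit failure explicitly.
  async function fetchWithRateLimitCheck(url: string): Promise<unknown> {
    const res = await fetch(url, {
      headers: { Accept: "application/vnd.github+json" },
    });
    const remaining = res.headers.get("x-ratelimit-remaining");
    const resetEpoch = res.headers.get("x-ratelimit-reset");
    if ((res.status === 403 || res.status === 429) && remaining === "0") {
      const resetAt = resetEpoch ? new Date(Number(resetEpoch) * 1000) : "unknown time";
      throw new Error(`GitHub API rate limit exhausted; resets at ${resetAt}`);
    }
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    return res.json();
  }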

...

Me: When I use a git command like 'git push', I guess the command would be using the GitHub API. If so, will that command's usage be counted against my GitHub rate limit?

CG: Great question — and very important to understand correctly.

Short answer

No, git push does NOT count against your GitHub API rate limits.

Long answer (what git push actually uses)

When you run:

git push

you are not interacting with the GitHub REST API (which has strict rate limits like 60/hour).

Instead, you are using:

🔹 Git’s native protocol

  • either HTTPS Git protocol

  • or SSH Git protocol

These are completely separate from the GitHub REST API.

GitHub treats Git operations independently from the REST API rate limits.


✔️ Examples of what does not count toward API rate limits

  • git clone

  • git fetch

  • git pull

  • git push

  • LFS operations

  • Checking out branches

  • Listing commits (git log locally)

Git operations are unlimited for your own repositories.


✔️ What does count toward GitHub API rate limits

Only HTTP requests made to:

https://api.github.com/...

Using:

  • REST API

  • GraphQL API

  • curl requests

  • browser requests to API

  • GitHub Actions making API calls

  • Any app using the API endpoints

The rate limits (60/hour unauthenticated, 5000/hour authenticated) apply only to these API endpoints.
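
One way to verify this yourself (a sketch, assuming Node 18+ with global fetch): query the rate_limit endpoint before and after a git push. The remaining count should not change, whereas it does drop after a call to an api.github.com data endpoint. Per GitHub's documentation, calling rate_limit itself does not consume the quota.

  // Sketch: check the caller's current REST API quota (per IP when unauthenticated).
  async function showGitHubApiQuota(): Promise<void> {
    const res = await fetch("https://api.github.com/rate_limit", {
      headers: { Accept: "application/vnd.github+json" },
    });
    const body = await res.json();
    // body.rate typically looks like { limit: 60, remaining: 57, reset: <epoch seconds> }
    console.log(body.rate);
  }

  showGitHubApiQuota().catch(console.error);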


🧠 Key point

Even though both go through GitHub's infrastructure, Git operations and the GitHub API are treated as separate services.

GitHub separates them so developers aren't rate-limited for normal Git usage.


...

Me: Very useful input. I will now explore the procedure for creating my own REST-like API as static JSON hosted on GitHub Pages (similar to vedicscriptures.github.io). That will make me independent of others' Gita API servers. But I will do this in a new chat.
====================
