Developing Bible AI API: Extracting Core Functionality from a 10-Year-Old Bot

Ten years ago, I built ScriptureBot—a simple Telegram bot that fetched Bible passages from BibleGateway. It worked, people used it, and for a decade, it chugged along doing its one job well.

But recently I had a new idea for interacting with the Bible, and a new tool on my hands: LLM-assisted coding! And as a trained software engineer whose instincts scream at me when I duplicate code, I thought: why not build a service that both ScriptureBot and Discipleship Journal could use?

Pulling Out the Core

The first step was identifying what made ScriptureBot valuable: its ability to reliably fetch and parse Bible passages. Rather than rebuilding this from scratch, I migrated the existing functionality into a dedicated API. This gave me:

  1. A clean separation between the UI (Telegram bot) and the data layer
  2. Reusable components that could power multiple applications
  3. A foundation for adding new capabilities without breaking the old bot

This migration was straightforward because the core logic was already battle-tested over 10 years of operation.
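The separation between the UI and the data layer can be sketched as a small Go interface. The type and field names below are illustrative assumptions, not the API's actual shape:

```go
package main

import "fmt"

// Passage is a hypothetical response shape; the real API's fields may differ.
type Passage struct {
	Reference string // e.g. "John 3:16"
	Version   string // e.g. "ESV"
	Text      string
}

// PassageSource abstracts the data layer so the Telegram bot (or any
// other client) never talks to BibleGateway directly.
type PassageSource interface {
	Fetch(reference, version string) (Passage, error)
}

// stubSource stands in for the real BibleGateway-backed implementation.
type stubSource struct{}

func (stubSource) Fetch(ref, ver string) (Passage, error) {
	return Passage{Reference: ref, Version: ver, Text: "For God so loved the world..."}, nil
}

func main() {
	var src PassageSource = stubSource{} // the bot depends only on the interface
	p, err := src.Fetch("John 3:16", "ESV")
	if err != nil {
		panic(err)
	}
	fmt.Println(p.Reference, "-", p.Version)
}
```

With the bot coded against the interface, swapping the backing source (or pointing it at the new API) doesn't touch the Telegram-facing code.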

Then, as you do, I broke it plenty of times over the next two steps.

Two Key Additions

With the core functionality extracted, I added two significant capabilities:

1. LLM Query Functionality

The most exciting addition: the ability to ask questions about Bible passages. Instead of just fetching “John 3:16,” I designed a query API that allows a client application to pass in the passage reference and a prompt together.

This works, but needs high-quality user testing. I’m using it primarily in my Discipleship Journal project, where it helps with personal study and reflection.

For resilience, I built this as a modular LLM client that supports multiple providers (OpenAI, Gemini, DeepSeek) with automatic failover. If one service is down or I run out of credits, it seamlessly switches to another.
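The failover pattern is simple to sketch: try each provider in priority order until one answers. The interface and provider stand-ins below are assumptions, not the client's real code:

```go
package main

import (
	"errors"
	"fmt"
)

// LLMProvider is a minimal sketch of the provider abstraction.
type LLMProvider interface {
	Name() string
	Complete(prompt string) (string, error)
}

// failoverClient tries each provider in order until one succeeds.
type failoverClient struct {
	providers []LLMProvider
}

func (c failoverClient) Complete(prompt string) (string, error) {
	var lastErr error
	for _, p := range c.providers {
		out, err := p.Complete(prompt)
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("%s: %w", p.Name(), err)
	}
	return "", fmt.Errorf("all providers failed: %w", lastErr)
}

// fakeProvider simulates a provider that is either up or down.
type fakeProvider struct {
	name string
	up   bool
}

func (f fakeProvider) Name() string { return f.name }
func (f fakeProvider) Complete(prompt string) (string, error) {
	if !f.up {
		return "", errors.New("service unavailable")
	}
	return "answer from " + f.name, nil
}

func main() {
	client := failoverClient{providers: []LLMProvider{
		fakeProvider{name: "openai", up: false}, // primary is down
		fakeProvider{name: "gemini", up: true},  // failover target
	}}
	out, err := client.Complete("Summarize John 3:16")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // falls through to the second provider
}
```

Because each provider hides behind the same interface, adding a new one (or reordering the priority list) is a one-line change to the slice.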

2. Expanded Bible Sources

The original bot only used BibleGateway. The new API adds support for:

  • BibleHub (for multiple translations and commentaries)
  • BibleNow (as a backup source)
  • Multiple versions (ESV, NIV, NASB, etc.)

This redundancy ensures the API remains reliable even if one source goes down or changes its structure, and also provides access to some less common versions.
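Source fallback works much like the LLM failover above, with one twist: not every source carries every translation, so the chain skips sources that can't serve the requested version. The struct layout and version lists here are illustrative assumptions:

```go
package main

import (
	"errors"
	"fmt"
)

// BibleSource is a sketch of the per-site scraper abstraction; the names
// and version lists below are illustrative, not the API's actual ones.
type BibleSource struct {
	Name     string
	Versions map[string]bool // translations this source can serve
	Fetch    func(ref, ver string) (string, error)
}

// fetchWithFallback tries each source that supports the requested version,
// in priority order, until one succeeds.
func fetchWithFallback(sources []BibleSource, ref, ver string) (string, error) {
	for _, s := range sources {
		if !s.Versions[ver] {
			continue // source doesn't carry this translation
		}
		if text, err := s.Fetch(ref, ver); err == nil {
			return text, nil
		}
	}
	return "", errors.New("no source could serve " + ref + " in " + ver)
}

func main() {
	gateway := BibleSource{
		Name:     "biblegateway",
		Versions: map[string]bool{"ESV": true, "NIV": true},
		Fetch: func(ref, ver string) (string, error) {
			return "", errors.New("layout changed") // simulate a broken scraper
		},
	}
	hub := BibleSource{
		Name:     "biblehub",
		Versions: map[string]bool{"ESV": true, "NASB": true},
		Fetch: func(ref, ver string) (string, error) {
			return "For God so loved the world...", nil
		},
	}
	text, err := fetchWithFallback([]BibleSource{gateway, hub}, "John 3:16", "ESV")
	if err != nil {
		panic(err)
	}
	fmt.Println(text)
}
```

The version check is what surfaces the less common translations: a request for a version only BibleHub carries routes straight there, while mainstream versions get the full redundancy of the chain.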

The AI-Assisted Development Process

Here’s the interesting part: most of this work was done by AI assistants.

How AI Helped:

  • Google Jules / Antigravity: Generated the initial API structure and migration plan
  • Claude Code: Assisted with the Go implementation and Cloud Run configuration
  • Various AI tools: Helped debug issues, write tests, and document the API

What I Had to Do:

  • Set up GCP infrastructure (Cloud Run, IAM, networking)
  • Learn Bruno for API testing (a surprisingly pleasant experience)
  • Integrate everything into a cohesive system
  • Quality assurance and edge case testing

The Infrastructure

Currently, the system runs across multiple Google Cloud projects. This wasn’t the original plan, but it happened organically as different components were developed at different times.

The current state:

  • Bible AI API: One GCP project
  • Discipleship Journal: Another GCP project
  • Various supporting services: Scattered across projects

The ideal state: Everything consolidated into a single, well-organized GCP project.

The lazy reality: I haven’t consolidated yet because it means redeploying everything. The system works as-is, so the motivation to reorganize is low.

Lessons Learned

  1. Extraction beats rewriting: Migrating proven functionality is faster and more reliable than starting from scratch.
  2. AI accelerates, doesn’t replace: AI tools are fantastic for generating code and solving specific problems, but human oversight is still essential for architecture and quality.
  3. Infrastructure debt accumulates: Multiple GCP projects seemed fine at the time, but now represent technical debt.
  4. Tool learning pays off: Taking the time to learn Bruno for API testing was worth the investment.
  5. Start with what works: The LLM query functionality is usable now, even if it needs more testing and refinement.

What’s Next

  1. User testing: Get more people using the LLM query functionality to identify edge cases and improve quality
  2. Source expansion: Add more Bible translations and study resources
  3. Infrastructure consolidation: Eventually migrate everything to a single GCP project (possibly with Pulumi)
  4. New applications: Build more tools on top of the API beyond the Discipleship Journal