Tag: coding

  • How I Turned an Idea into a Fully Functional WordPress Plugin (With a Little Help from Gemini) 🚀


    Ever wanted your StoryGraph reading lists to live right on your blog? I did, and teaming up with Google’s Gemini as my AI vibe-coding partner turned the project into a crash course in agile development and creative problem-solving.

    Lately, I’ve been on a mission to level up my Gen AI skills, not just in theory but by actually building things. I’m not a developer by trade or even by training, but I’ve been experimenting with what’s possible using tools like Claude, ChatGPT, and Gemini.

    Last night a project sort of took on a life of its own. I wanted to create a WordPress plugin that pulls in my StoryGraph reading lists and displays them on my blog. It started as a “let’s see if this is even possible” experiment and quickly evolved into a messy, fun, but very educational journey through AI-assisted coding and constant pivoting.

    During this project there were three big pivots:

    1️⃣ Scraping Strategy → 403s Everywhere

    Since The StoryGraph lacks an official API, the first idea, which Gemini suggested early on, was to have the plugin scrape my StoryGraph reading lists. Technically, it could’ve worked… but we immediately ran into 403 errors. StoryGraph’s anti-bot protection (shoutout to Cloudflare 👋) was not having it. And while the data was public, scraping without permission lives in a legal gray area I didn’t want to mess with. It was time to pivot.

    2️⃣ RSS Feeds → Behind a Paywall → Deprecated Functionality

    Next up: RSS. We reworked the plugin to pull from StoryGraph’s XML feeds… only to hit another wall. Turns out they’re locked behind a “Plus” subscription. I ponied up, and then found out the feeds had been deprecated. Classic AI wild goose chase. Those “Check these statements. AI can be wrong.” warnings should indeed be heeded.

    3️⃣ Data Export → Bingo

    Finally, after digging through account settings, I spotted StoryGraph’s Export Data feature. One quick download later and boom: a clean, reliable source of truth. No scraping, no chasing ghosts. Just good old-fashioned data export.

    🔧 The Plugin Takes Shape

    The next piece to build was the actual plugin functionality, which came together much faster than figuring out how to retrieve the data:

    • Admin Upload: Built a simple upload screen in the WordPress admin where I can drop in the data file instead of relying on live fetches.
    • Smart Parsing: Gemini helped map out fields like Title, Author, and Read Status. We ironed out some bugs around file types and capitalization quirks.
    • Cover Art Magic: For a more visual display, we hooked into the Open Library API to grab book covers using ISBNs.
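    The plugin itself is WordPress/PHP, but the parse-then-fetch-covers idea is easy to sketch in Python. Title, Author, and Read Status are the fields named above; the “ISBN” column name is my assumption about the export, and the Open Library cover URL pattern is the service’s standard ISBN endpoint:

```python
import csv
import io

# Illustrative sketch (the real plugin is PHP): parse a StoryGraph
# CSV export and attach an Open Library cover URL per book.

def cover_url(isbn, size="M"):
    """Open Library serves covers by ISBN; size is S, M, or L."""
    isbn = isbn.strip()
    return (
        f"https://covers.openlibrary.org/b/isbn/{isbn}-{size}.jpg"
        if isbn else ""
    )

def parse_export(csv_text):
    """Yield one dict per book from the export's CSV text."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        yield {
            "title": row.get("Title", "").strip(),
            "author": row.get("Author", "").strip(),
            "status": row.get("Read Status", "").strip().lower(),
            "cover_url": cover_url(row.get("ISBN", "")),
        }
```

    Lower-casing the status field is one way to sidestep the capitalization quirks mentioned above.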

    📈 The Result

    What began as a plain-text list is now a polished, dynamic, and visual display of my reading life, right on my WordPress site. This project wasn’t just about writing code. It was about thinking creatively, adapting on the fly, and using AI as a true vibe-coding partner to get to a smarter solution.

    🔗 You can check it out here: jamesk.xyz/books

     

  • Building Faithly: The Technical Reality Behind My Spiritual AI (Part 2 of 4)


    When I set out to build Faithly, my vision was ambitious: create a chatbot that could support people in their spiritual life—offering scripture, encouragement, and interpretation rooted in Christian tradition. The journey from idea to MVP wasn’t smooth, but every technical challenge pushed me closer to something real, functional, and surprisingly powerful.

    Here’s the honest breakdown of what it actually took to build a spiritual AI from scratch.

    My Tech Stack (Or: How I Made Simple Things Complicated)

    Looking back, my tool choices tell the story of someone who wanted to learn everything the hard way:

    Python became my backbone for all the backend logic and scripting. It felt like the right choice for AI work, and honestly, it was one of the few languages I felt remotely confident in.

    OpenAI’s API powered the intelligence, specifically their text-embedding-3-small model for generating vector embeddings of Bible verses. This was where the real magic happened—turning ancient text into mathematical representations that could be searched and compared.
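    To make “searched and compared” concrete: once every verse is a vector, relevance is just vector similarity. Here’s a minimal, dependency-free sketch of cosine-similarity ranking (in the actual build, ChromaDB performs this search):

```python
import math

# Cosine similarity: how closely two embedding vectors point in the
# same direction, regardless of their magnitudes.
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, verse_vecs, k=3):
    """Return indices of the k verse vectors most similar to the query."""
    ranked = sorted(
        range(len(verse_vecs)),
        key=lambda i: cosine_similarity(query_vec, verse_vecs[i]),
        reverse=True,
    )
    return ranked[:k]
```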

    ChromaDB served as my lightweight, local vector database for fast retrieval and search. I chose it because it seemed simpler than alternatives like Pinecone or Weaviate, though “simpler” is relative when you’re learning vector databases from scratch.

    JSON became my data format of choice for processing Bible verses with metadata (book, chapter, verse). Clean, structured, and easy to work with—when it wasn’t breaking my scripts with encoding issues.
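    For reference, a verse record in that shape might look like this (the field values are illustrative). Round-tripping with `ensure_ascii=False` is one way to keep non-ASCII characters in biblical text from turning into escape sequences:

```python
import json

# Illustrative verse record with the metadata described: book, chapter, verse.
verse = {
    "book": "John",
    "chapter": 3,
    "verse": 16,
    "text": "For God so loved the world...",
}

# ensure_ascii=False preserves any non-ASCII characters in the text as-is,
# which avoids one common source of encoding headaches.
encoded = json.dumps(verse, ensure_ascii=False)
decoded = json.loads(encoded)
```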

    DigitalOcean VPS hosted everything in a virtual Python environment. This was probably overkill, but I wanted to understand the infrastructure from the ground up.

    Ghost (third-party managed) eventually became my solution for the public-facing Faithly blog and downloadable resources. More on why “eventually” in a moment.

    Canva handled the design work for Bible study templates and digital goods. Sometimes the best technical solution is admitting you’re not a designer.

    The Strategies That Actually Worked

    Through trial and error (mostly error), I developed some approaches that kept the project moving forward:

    MVP First, Features Later was my mantra. I focused solely on core functionality: embedding scripture and retrieving it based on user queries. No fancy UI, no advanced features—just the essential engine that could match user questions to relevant verses.

    Batch Processing for Embedding became essential when I hit the wall of API quotas and RAM limits. Processing 10 verses at a time kept me within OpenAI’s rate limits and prevented my 454MB RAM VPS from crashing.
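    The chunking itself is simple; a sketch of the helper, with the batch size of 10 mentioned above:

```python
# Process verses 10 at a time so each API call (and the memory it
# needs) stays small enough for rate limits and a tiny VPS.
BATCH_SIZE = 10

def batches(items, size=BATCH_SIZE):
    """Split items into consecutive chunks of at most `size`."""
    for start in range(0, len(items), size):
        yield items[start:start + size]
```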

    Resume from Failures saved my sanity. When my script inevitably crashed midway through batch 421 (yes, I counted), I added start_index logic to resume exactly where it left off without reprocessing thousands of verses.
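    A minimal sketch of that resume idea: persist the next batch index to a small checkpoint file so a crash restarts there instead of at zero (the file name and function names are illustrative, not my exact script):

```python
import json
import os

CHECKPOINT = "embed_progress.json"  # illustrative checkpoint file name

def load_start_index(path=CHECKPOINT):
    """Return the next batch index to process, or 0 on a fresh run."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)["start_index"]
    return 0

def save_start_index(index, path=CHECKPOINT):
    with open(path, "w") as f:
        json.dump({"start_index": index}, f)

def process_batches(all_batches, embed_fn, path=CHECKPOINT):
    """Embed batches from the checkpoint onward, saving progress each time."""
    start = load_start_index(path)
    for i in range(start, len(all_batches)):
        embed_fn(all_batches[i])   # may crash; completed work is recorded
        save_start_index(i + 1, path)
```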

    Prompt Engineering for RAG was where I spent way too much time experimenting. Getting the right format for scripture plus metadata to produce relevant results from OpenAI’s completion model was part art, part science, and part stubborn persistence.
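    The general shape that emerged: put each verse’s metadata inline next to its text, then constrain the model to answer from that context. A simplified, illustrative template (not my exact prompt):

```python
# Format retrieved verses plus metadata into a RAG prompt. The wording
# of the instruction line is illustrative; tuning it was the hard part.
def build_prompt(question, verses):
    context = "\n".join(
        f'{v["book"]} {v["chapter"]}:{v["verse"]}: "{v["text"]}"'
        for v in verses
    )
    return (
        "Answer the question using only the scripture below, and cite "
        "book, chapter, and verse for anything you quote.\n\n"
        f"Scripture:\n{context}\n\n"
        f"Question: {question}"
    )
```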

    Ghost for Simplicity was my eventual surrender to pragmatism. After banging my head against manual server setups, I pivoted to a $6/month hosted Ghost blog. Sometimes the best technical decision is knowing when to stop being technical.

    The Technical Challenges That Humbled Me

    Every ambitious project has its reality checks. Here were mine:

    Database Nightmares started early. My attempts to self-host Ghost on DigitalOcean turned into a comedy of database connection errors. “Access denied for user ‘ghost’@‘localhost’” became my nemesis. I eventually scrapped the entire droplet and started over, which taught me the value of managed services.

    API Quotas and RAM Limits created a perfect storm of constraints. OpenAI’s API limits meant I couldn’t just fire off requests as fast as I wanted, and my VPS’s 454MB RAM made it impossible to process the entire Bible in one go. This forced me to build a custom batch/resume system that actually made the whole process more robust.

    Classic Python Pitfalls humbled me regularly. Unterminated string literals, malformed if __name__ == "__main__" blocks, encoding issues with biblical text—I hit every rookie mistake in the book. Each error taught me more about Python than I wanted to learn, but the debugging skills proved invaluable.

    ChromaDB Persistence was trickier than expected. Making sure my vector storage survived server reboots required some trial-and-error and careful path setup. Getting that ./chroma_db directory configured correctly was a small victory that felt huge at the time.

    What I Learned (The Hard Way)

    Building Faithly taught me some lessons that go beyond the technical details:

    Don’t Overengineer Early was probably the biggest one. Going straight to a VPS and manual configuration slowed me down significantly. Using managed services for the parts that weren’t core to my learning (like the blog) was a game-changer.

    Control the Controllables became my philosophy when dealing with the Bible’s massive scope. Nearly 800,000 words across 66 books meant I needed to be surgical about batching, error handling, and memory management. You can’t brute-force your way through datasets this large.

    Build in Resilience from day one. Crashes happen, APIs fail, servers reboot unexpectedly. Having a resume function didn’t just save hours of reprocessing time—it gave me the confidence to experiment knowing I could recover from failures.

    The Unexpected Wins

    Despite all the challenges, some things worked better than expected. The vector embeddings were surprisingly good at finding relevant verses, even for complex spiritual questions. The batching system, born out of necessity, actually made the whole process more stable and debuggable.

    Most importantly, I learned that building something real—even if it’s not perfect—teaches you more than any tutorial or course ever could.

    Coming Up Next

    In Part 3, I’ll dive into the theological minefield I walked into: how do you handle denominational differences when different Christian traditions interpret the same verses completely differently? Spoiler alert: it’s more complex than I thought.


    This is Part 2 of a 4-part series on building AI for spiritual conversations. What technical challenges have surprised you in your AI projects? Share your stories in the comments.