
Journal: LangChain Deep Dive, Vector DBs, and a Clock Conversion Puzzle


LangChain Chains: Sequential vs. Routing

Spent the morning going through LangChain chaining. The core idea is straightforward — you build multiple prompt chains and wire them together so the output of one feeds into the input of the next. Think of it as an assembly line for LLM calls. You kick off the first chain with raw user input, and by the end of the pipeline you've got a final, refined response.
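The assembly-line idea can be sketched without any framework at all — a sequential chain is just function composition over async calls. This is a minimal sketch, not LangChain's actual API; `llm` here is a deterministic mock standing in for a real model call, and the step names are made up for illustration.

```typescript
// Each step takes text in and returns text out.
type Step = (input: string) => Promise<string>;

// Mock LLM call: a real implementation would hit a model API.
const llm = async (prompt: string): Promise<string> => `[llm] ${prompt}`;

// Compose steps so the output of one feeds the input of the next.
const sequentialChain = (...steps: Step[]): Step =>
  async (input) => {
    let result = input;
    for (const step of steps) {
      result = await step(result);
    }
    return result;
  };

// Two toy prompt chains wired into a pipeline.
const summarize: Step = (text) => llm(`Summarize: ${text}`);
const translate: Step = (text) => llm(`Translate to French: ${text}`);

const pipeline = sequentialChain(summarize, translate);
```

Kicking off `pipeline("raw user input")` runs the summarize prompt first, then feeds its output into the translate prompt — the same shape LangChain's sequential chains give you, just with the plumbing visible.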

The more interesting bit was the routing chain. The use case that clicked for me: imagine a chatbot that supports multiple domains — physics, math, history, science. Each domain has its own system prompt tuned for that subject. When a user asks something, the routing chain figures out which "expert" chain is the best fit, then passes the input there. There's also a fallback default prompt in case nothing matches.

The routing chain actually makes two LLM calls: one to decide the route, and a second to get the actual answer using the matched system prompt. Took me a moment to internalize that it's two round-trips, not one.
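The two-round-trip structure is easier to see laid out as code. Again a hedged sketch rather than LangChain's real router API: the `llm` mock classifies anything mentioning "gravity" as physics, and the system prompts are invented for the example.

```typescript
// Mock LLM: routing prompts get a route name back, everything else gets an answer.
const llm = async (prompt: string): Promise<string> => {
  if (prompt.startsWith("Route:")) {
    // Pretend the model classifies gravity questions as physics.
    return prompt.includes("gravity") ? "physics" : "default";
  }
  return `[answer via] ${prompt.split("\n")[0]}`;
};

// One tuned system prompt per domain, plus a fallback.
const systemPrompts: Record<string, string> = {
  physics: "You are a physics expert.",
  math: "You are a math expert.",
  default: "You are a helpful assistant.", // fallback when nothing matches
};

async function routingChain(question: string): Promise<string> {
  // Round-trip 1: decide which expert chain fits.
  const route = await llm(`Route: ${question}`);
  const system = systemPrompts[route] ?? systemPrompts.default;
  // Round-trip 2: answer using the matched system prompt.
  return llm(`${system}\n${question}`);
}
```

The two `await llm(...)` calls make the cost explicit: every routed question pays for a classification call before the answering call.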


RAG: Four Ways to Query Documents with a Vector Database

Next up was a video on using vector databases to build a question-answering system over documents. The gist: you embed your document content into a vector store, then query it. But how you query and pass context to the LLM makes a big difference. The video covered four approaches:

  1. Stuffing — Dump the entire document content into the context window and send it with the prompt in one shot. Simple, but falls apart with large docs.

  2. Map Reduce — Process each document chunk individually, passing each one along with the prompt to the LLM. The catch is the model only ever sees one chunk at a time, so it has no awareness of what came before or after. That limits answer quality when the answer depends on context spread across chunks.

  3. Refine — Similar to Map Reduce, but each iteration carries a summary of the previous response forward. It builds up context progressively, which gives the model a better running picture of the document as a whole.

  4. Map Rerank — Each document chunk gets a relevance score when processed. After scoring all chunks, the highest-scoring one gets pulled and used as the context for the final answer. Good when you want the model to focus on the most relevant passage.
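Of the four, Refine is the one whose loop is worth sketching, since the "carry the previous response forward" step is the whole trick. This is a toy sketch under assumptions: `llm` is a mock that just numbers its answers, and in a real pipeline each prompt would go to an actual model.

```typescript
// Mock LLM that counts how many times it was called.
let calls = 0;
const llm = async (prompt: string): Promise<string> => {
  calls += 1;
  return `answer v${calls}`;
};

// Refine strategy: process chunks one at a time, but embed the
// previous answer in each new prompt so context accumulates.
async function refine(chunks: string[], question: string): Promise<string> {
  let answer = "";
  for (const chunk of chunks) {
    answer = await llm(
      `Question: ${question}\nPrevious answer: ${answer}\nNew context: ${chunk}`
    );
  }
  return answer;
}
```

Swap the loop body and you get the other strategies: drop the `Previous answer` line and collect every response for Map Reduce, or ask for a relevance score per chunk and keep the max for Map Rerank.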


AI Agents with LangChain — Moving On

Finished the agents section of the course. The pattern is: wire up an LLM as the reasoning engine, attach tools to it, and you've got an agent. Nothing too surprising there conceptually.
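Stripped of the framework, the pattern is a loop: the LLM reasons, optionally calls a tool, and the tool's result is fed back until the LLM emits a final answer. A minimal sketch with a mocked reasoning engine and one toy tool — none of this is LangChain's agent API, just the shape of it:

```typescript
type Tool = (input: string) => string;

// One toy tool: adds two comma-separated numbers.
const tools: Record<string, Tool> = {
  add: (input) => {
    const [a, b] = input.split(",").map(Number);
    return String(a + b);
  },
};

// Mock reasoning engine: with no observation yet it requests a tool call;
// once it has an observation, it produces the final answer.
const reason = (observation: string | null): { tool?: string; input?: string; final?: string } =>
  observation === null
    ? { tool: "add", input: "2,3" }
    : { final: `The answer is ${observation}` };

function runAgent(): string {
  let observation: string | null = null;
  for (;;) {
    const decision = reason(observation);
    if (decision.final !== undefined) return decision.final;
    // Execute the requested tool and feed the result back into the loop.
    observation = tools[decision.tool!](decision.input!);
  }
}
```

A real agent replaces `reason` with an LLM call that decides between tools based on their descriptions, but the loop itself is this simple.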

Honestly, I got to the end of this course and realized I haven't found a concrete project to build with this yet. The knowledge is there, but nothing has clicked as an immediate "I can build this now" moment. Closing this one out and moving to the next course.


HackerRank Algorithms: Warmup Done, Then Hit a Real Problem

Switched gears to HackerRank for some algorithm practice. The warmup subdomain had around 10-11 questions — all very beginner level. Knocked them all out in under 5 minutes, every piece of code running on the first try. Nothing worth writing home about.

Then the last question actually made me think. The task: convert a 12-hour AM/PM time string into 24-hour format. Sounds trivial. Wasn't quite.


Solving the 12-Hour to 24-Hour Clock Conversion

The tricky part is the edge cases around 12:00. The conversion rules are:

  • 12:xx AM → 00:xx (midnight edge case)

  • 1:xx AM through 11:xx AM → same hour, no change

  • 12:xx PM → 12:xx (noon, stays as-is)

  • 1:xx PM through 11:xx PM → add 12 to the hour

My first pass extracted the hour, minutes, seconds, and the AM/PM suffix into separate variables. Then I wrote the conditions:

  1. If hour is 12 and it's AM → set hour to 00

  2. If it's AM (and not 12) → keep the hour as-is

  3. If it's PM → add 12 to the hour

Ran it and immediately hit a bug. When the input was 12:xx PM, my code was outputting 24 — because it was blindly adding 12 to 12. The fix was one extra condition: if the hour is already 12 and it's PM, just leave it at 12. Don't add anything.

function timeConversion(s: string): string {
    // Input format: hh:mm:ssAM or hh:mm:ssPM
    const h = s.slice(0, 2)
    const m = s.slice(3, 5)
    const sec = s.slice(6, 8)
    const isAM = s.slice(8) === 'AM'
    let fullHour = ''

    if (h === '12' && isAM) fullHour = '00'  // midnight: 12 AM → 00
    else if (isAM) fullHour = h              // 1–11 AM: unchanged
    else if (h === '12') fullHour = '12'     // noon: 12 PM stays 12
    else fullHour = `${12 + +h}`             // 1–11 PM: add 12

    return `${fullHour}:${m}:${sec}`
}

Input:  07:05:45PM
Output: 19:05:45

That one edge case was the whole puzzle, really. After patching it, all test cases passed.


Up Next: GitHub Copilot for Agentic Coding

Picked up a Udemy course by Tom Phillips on using GitHub Copilot for agentic coding workflows — how to use it to actually build applications, not just autocomplete snippets. Just getting started, carrying this into tomorrow.
