Journal: Valid Parentheses, Interview Prep Pivot & LangChain Memory
Cracking Valid Parentheses with a Stack
Picked up LeetCode problem #20 today — Valid Parentheses. Honestly not a hard one once you recognize it's a classic stack problem. The core idea is simple: every time you see an opening bracket, push it onto the stack. When you hit a closing bracket, peek at the top of the stack and check if it's the matching opener.
I started by initializing an empty stack and a hash map that pairs each closing bracket with its corresponding opening bracket, so } maps to {, ] to [, and ) to (. Then I looped through every character in the string. If it's an opening bracket, push it and continue. If it's a closing bracket, peek at the top of the stack and check whether it matches the opener the hash map expects. If it doesn't match (or the stack is empty), the string is invalid: return false right there. Otherwise pop the matched opener and keep going.
After the loop, the final check is whether the stack is empty. If there are still elements sitting in it, that means some opening brackets were never closed, so it's still invalid. Empty stack means we're good.
function isValid(s: string): boolean {
  // Openers waiting for their matching closer.
  const stack: string[] = []
  // Map each closing bracket to the opener it requires.
  const pair: Record<string, string> = {
    ')': '(',
    '}': '{',
    ']': '['
  }
  for (const bracket of s) {
    if (bracket === '(' || bracket === '{' || bracket === '[') {
      stack.push(bracket)
      continue
    }
    // Peek first: an empty stack yields undefined, which never matches.
    const lastBracket = stack.at(-1)
    if (lastBracket !== pair[bracket]) return false
    stack.pop()
  }
  // Leftover openers mean some brackets were never closed.
  return !stack.length
}

// isValid('()[]{}') → true, isValid('(]') → false, isValid('(') → false
Why I'm Pausing LeetCode for HackerRank
While browsing around today I ended up on HackerRank and spent some time thinking about interview prep strategy. A few of the bigger tech companies here in Nepal actually use HackerRank for their technical screening rounds, which I didn't realize before. That changes things a bit.
I'm going to put LeetCode on hold for now and shift focus to HackerRank. I also want to take the front-end developer certification they offer — and honestly any other relevant certification on the platform. Better to align prep with what companies are actually going to throw at me.
LangChain Memory — Five Types Worth Knowing
Watched a couple of LangChain videos today. The first was mostly introductory stuff — how to set up the library, specify models, write prompts, and parse outputs. That level of detail is pretty well covered in the official docs anyway.
The second video was the more interesting one. It digs into how memory works in LangChain, which is really the core of building any useful chatbot.
Buffer Memory is the straightforward one — store the entire conversation history and feed it all back on each call. Simple, but it gets expensive fast.
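The mechanic is easy to sketch without LangChain at all. The class and method names below are my own, not the library's API; LangChain ships its own memory classes that wrap the same idea.

```typescript
// Minimal buffer-memory sketch: keep every message, replay all of it each call.
// (Illustrative only, not LangChain's actual buffer memory class.)
type Message = { role: "user" | "assistant"; content: string };

class BufferMemorySketch {
  private history: Message[] = [];

  save(msg: Message): void {
    this.history.push(msg);
  }

  // Everything ever said goes back into the prompt: simple but costly,
  // since token usage grows linearly with conversation length.
  loadContext(): Message[] {
    return [...this.history];
  }
}
```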
Buffer Window Memory is a smarter version — instead of keeping everything, you only retain the last k messages (say, 5 or 10). Anything older just gets dropped. Good for keeping token usage in check.
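The windowing logic is just a trim on save. Again a hand-rolled sketch, not the library's class; the real thing is configured with a similar k parameter.

```typescript
// Window-memory sketch: retain only the last k messages, drop the rest.
// (Illustrative only, not LangChain's actual window memory class.)
type Msg = { role: string; content: string };

class WindowMemorySketch {
  private history: Msg[] = [];
  constructor(private k: number) {}

  save(msg: Msg): void {
    this.history.push(msg);
    // Anything older than the last k messages just gets dropped.
    if (this.history.length > this.k) {
      this.history = this.history.slice(-this.k);
    }
  }

  loadContext(): Msg[] {
    return [...this.history];
  }
}
```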
Token Buffer Memory puts a hard cap on the number of tokens the model can use for context — something like 50 or 100 tokens max. It's a more granular way to manage cost vs. context tradeoff.
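The difference from the window version is that eviction is driven by a token count rather than a message count. The tokenizer here is a crude whitespace split purely for illustration; a real implementation would count with the model's own tokenizer.

```typescript
// Token-buffer sketch: evict oldest messages until total tokens fit the cap.
// (Illustrative only; countTokens is a stand-in for a real tokenizer.)
type TMsg = { role: string; content: string };

const countTokens = (text: string): number =>
  text.split(/\s+/).filter(Boolean).length;

class TokenBufferSketch {
  private history: TMsg[] = [];
  constructor(private maxTokens: number) {}

  save(msg: TMsg): void {
    this.history.push(msg);
    // Drop from the front until we are under the token budget.
    while (this.totalTokens() > this.maxTokens && this.history.length > 1) {
      this.history.shift();
    }
  }

  totalTokens(): number {
    return this.history.reduce((sum, m) => sum + countTokens(m.content), 0);
  }

  loadContext(): TMsg[] {
    return [...this.history];
  }
}
```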
Summary Memory is the one that actually excited me. After each exchange, you ask the model to summarize the conversation so far and store that summary instead of the raw messages. You're keeping the semantic content of the whole conversation without burning tokens on verbatim history. This feels like the most practical approach for real-world chatbots.
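The shape of it is a fold: each new message gets merged into a running summary. In the sketch below, summarize is a trivial string stub standing in for the LLM call that would do the actual compression; everything here is my own naming, not LangChain's API.

```typescript
// Summary-memory sketch: store one evolving summary instead of raw history.
type SMsg = { role: string; content: string };

// Stand-in for an LLM call. A real implementation would prompt the model
// to fold the new exchange into the running summary.
const summarize = (runningSummary: string, msg: SMsg): string =>
  `${runningSummary} [${msg.role}: ${msg.content.slice(0, 20)}]`.trim();

class SummaryMemorySketch {
  private summary = "";

  save(msg: SMsg): void {
    // Raw messages are never kept; only the compressed summary survives.
    this.summary = summarize(this.summary, msg);
  }

  loadContext(): string {
    return this.summary;
  }
}
```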
Vector Data Memory is the fifth, and probably the most powerful for long-running or knowledge-heavy applications. Instead of keeping conversation history in a list or a summary, you store it in a vector database. When you need context, you query the DB for semantically relevant chunks rather than pulling everything.
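The retrieval step reduces to similarity search over embeddings. In this sketch the embedding is a toy bag-of-words vector and the "database" is an in-memory array; a real setup would call an embedding model and a proper vector store, and none of the names below are LangChain's.

```typescript
// Vector-memory sketch: store text with toy embeddings, retrieve by similarity.
// Toy embedding: bag-of-words counts (a real setup uses an embedding model).
const embed = (text: string): Map<string, number> => {
  const v = new Map<string, number>();
  for (const w of text.toLowerCase().split(/\W+/).filter(Boolean)) {
    v.set(w, (v.get(w) ?? 0) + 1);
  }
  return v;
};

// Cosine similarity over sparse word-count vectors.
const cosine = (a: Map<string, number>, b: Map<string, number>): number => {
  let dot = 0, na = 0, nb = 0;
  for (const [w, x] of a) { dot += x * (b.get(w) ?? 0); na += x * x; }
  for (const [, y] of b) nb += y * y;
  return na && nb ? dot / Math.sqrt(na * nb) : 0;
};

class VectorMemorySketch {
  private entries: { text: string; vec: Map<string, number> }[] = [];

  save(text: string): void {
    this.entries.push({ text, vec: embed(text) });
  }

  // Return only the k stored chunks most similar to the question,
  // instead of replaying the whole history.
  query(question: string, k = 2): string[] {
    const qv = embed(question);
    return [...this.entries]
      .sort((a, b) => cosine(b.vec, qv) - cosine(a.vec, qv))
      .slice(0, k)
      .map((e) => e.text);
  }
}
```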
Short day but productive across three different areas — algorithm practice, interview strategy, and AI tooling. The LangChain memory types are going to be useful soon.

