Have you ever used something like Claude Code, the OpenAI Codex CLI, or any other agent-style coding tool? You just give it a prompt, and it goes off for minutes making the changes you asked for. It has its quirks, sure, but it feels like magic.
Here’s the wild part: you can build a minimal version of that in just 200 lines of code. It could’ve been even shorter, but I chose readability over cleverness.
Don’t believe it? Let’s build it together. I’m calling it `nano-claude-code`.
First step? Let’s spin up a new JavaScript project. I’ve never used Bun (a fast JavaScript runtime) before, so this is the perfect excuse to try it.
```bash
bun init nano-claude-code
```
Calling Tools
How do you actually call Claude via API?
Open `index.ts` and drop in the following:
```typescript
import Anthropic from "@anthropic-ai/sdk";

const anthropic = new Anthropic();
const model = "claude-3-5-haiku-latest";
const maxTokens = 4096;

const messages: Anthropic.MessageParam[] = [
  {
    role: "user",
    content:
      "Based on ./README.md, what command should I use to run my project?",
  },
];

const response = await anthropic.messages.create({
  model,
  messages,
  max_tokens: maxTokens,
});

console.log(response.content);
```
Then run it:
```bash
bun run index.ts
```
Here’s what I got back:
```
[
  {
    type: "text",
    text: "I apologize, but I cannot access or read a local file named \"./README.md\" from my current context. To help you determine the command to run your project, I would need you to provide the contents of the README.md file or specify the specific command mentioned in it.",
  }
]
```
Totally fair—LLMs don’t have access to your local filesystem. But that’s easy to fix. I wrote a simple function to read a file:
```typescript
import { resolve } from "node:path";

async function read_file(args: { path: string }): Promise<string> {
  try {
    const file = Bun.file(resolve(args.path));
    return await file.text();
  } catch (error) {
    return `Error reading file: ${
      error instanceof Error ? error.message : String(error)
    }`;
  }
}
```
Now, instead of the LLM saying "I don't know," we want it to use this tool, read the file, and then generate a proper answer. Let’s wire that up.
To define and register a tool, you need two things:

- A function to actually do the work (like `read_file` above).
- A JSON definition that tells the LLM when and how to use it.

That JSON includes:

- A `name` (`read_file`)
- A `description` (when to use it)
- An `input_schema` (how to use it)
Now let’s plug this into the code:
```typescript
// JSON definition of the `read_file` tool
const tools: Anthropic.ToolUnion[] = [
  {
    name: "read_file",
    description: "Read the contents of a file",
    input_schema: {
      type: "object",
      properties: {
        path: { type: "string", description: "The path to the file to read." },
      },
      required: ["path"],
    },
  },
];

let response = await anthropic.messages.create({
  model,
  tools,
  messages,
  max_tokens: maxTokens,
});

console.log(response.content);

// Don’t forget to save the assistant’s response
messages.push({ role: "assistant", content: response.content });
```
Now the LLM knows about the `read_file` tool and wants to use it:
```
[
  {
    type: "text",
    text: "I'll read the README.md file to find the command for running the project.",
  },
  {
    type: "tool_use",
    id: "toolu_01GJUR77TssmsB74akV8H1GY",
    name: "read_file",
    input: {
      path: "./README.md",
    },
  }
]
```
Nice. The model is asking to read `README.md`. Time to actually run the tool and feed the result back into the conversation:
```typescript
// For this demo, we know the tool_use block is the second content block,
// so we can grab it directly. A real agent would loop over response.content.
const toolUse = response.content[1] as Anthropic.ToolUseBlock;
const toolUseInput = toolUse.input as { path: string };
const toolResult = await read_file(toolUseInput);

messages.push({
  role: "user",
  content: [
    {
      type: "tool_result",
      tool_use_id: toolUse.id,
      content: toolResult,
    },
  ],
});

console.log(JSON.stringify(messages, null, 2));
```
Here’s the full conversation so far:
```json
[
  {
    "role": "user",
    "content": "Based on ./README.md, what command should I use to run my project?"
  },
  {
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "I'll read the README.md file to find the command for running the project."
      },
      {
        "type": "tool_use",
        "id": "toolu_01GJUR77TssmsB74akV8H1GY",
        "name": "read_file",
        "input": {
          "path": "./README.md"
        }
      }
    ]
  },
  {
    "role": "user",
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01GJUR77TssmsB74akV8H1GY",
        "content": "# nano-claude-code\n\n<img src=\"./demo.png\" />\n\nThis repo uses [Bun](https://bun.com/get) as the JS runtime and package manager.\n\nTo install dependencies:\n\n```bash\nbun install\n```\n\nTo run `nano-claude-code`, use the following command:\n\n```bash\nbun run index.ts\n```\n"
      }
    ]
  }
]
```
Now we can ask Claude to give the final answer:
```typescript
response = await anthropic.messages.create({
  model,
  tools,
  messages,
  max_tokens: maxTokens,
});

console.log(response.content);
```
And here’s what we get:
```
[
  {
    type: "text",
    text: "Based on the README.md, the command to run your project is:\n\n```bash\nbun run index.ts\n```\n\nThis command uses Bun to run the `index.ts` file, which appears to be the main entry point of the project. Before running, make sure you've first installed the dependencies using `bun install`.",
  }
]
```
Perfect. The model used a tool, read your local file, and gave a helpful, grounded answer. Now that you’ve seen how tool use works, let’s build a full agent. And trust me—it’s not as complicated as you might think.
Tools for our Code Agent
Alright, so you’ve seen how a tool works: it’s just a function the LLM can call to get information or perform actions in the environment (in our case, your local filesystem).
There are two types of tools:
- READ tools – These only observe the environment. Examples: reading a file, searching for a string in the codebase.
- WRITE tools – These change the environment. Examples: writing a file, running a shell command.
For `nano-claude-code`, we’ll define three tools:

- `read_file` – a READ tool
- `write_file` – a WRITE tool
- `execute_bash` – both READ and WRITE (depending on the command)

Technically, `read_file` and `write_file` could be handled by `execute_bash`—you could just run `cat index.js` or `echo ... > file`. But I kept them separate for clarity.
To keep this post focused and avoid dumping boilerplate code here, I recommend checking out the `tools.ts` file in the repo for the full JSON definitions and implementations of these tools.
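If you want a feel for them without opening the repo, here is a rough sketch of what the other two tools might look like. This is an approximation using Node's standard APIs (which Bun also supports); the real implementations in `tools.ts` may differ:

```typescript
import { writeFile } from "node:fs/promises";
import { resolve } from "node:path";
import { exec } from "node:child_process";
import { promisify } from "node:util";

const execAsync = promisify(exec);

// write_file: a WRITE tool. Creates or overwrites a file with the given content.
async function write_file(args: { path: string; content: string }): Promise<string> {
  try {
    await writeFile(resolve(args.path), args.content);
    return `Successfully wrote to ${args.path}`;
  } catch (error) {
    return `Error writing file: ${
      error instanceof Error ? error.message : String(error)
    }`;
  }
}

// execute_bash: READ or WRITE depending on the command it runs.
// Returning stderr alongside stdout lets the model see failures and react.
async function execute_bash(args: { command: string }): Promise<string> {
  try {
    const { stdout, stderr } = await execAsync(args.command, { timeout: 30_000 });
    return stdout + stderr;
  } catch (error) {
    return `Error executing command: ${
      error instanceof Error ? error.message : String(error)
    }`;
  }
}
```

Note that, like `read_file`, both return error strings instead of throwing, so a failed tool call gets reported back to the model rather than crashing the agent loop.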
The `while` Loop
At its core, any agentic system is just a `while` loop.
Seriously. That’s the whole trick: You tell the LLM, “Keep using tools until you don’t need them anymore—then you’re done.”
In practice, you maintain:

- a `messages` array to store the conversation history
- a `stopReason` to know when the LLM is done using tools

The stop reason is either `tool_use` (the LLM wants to use a tool) or `end_turn` (the LLM is done and wants user input). For the full list of stop reasons, check the Anthropic docs.
Here’s how it starts:
```typescript
async function chat() {
  const messages: Anthropic.MessageParam[] = [];
  let stopReason: Anthropic.StopReason | null = null;

  while (true) {
    // If the last response wasn't a tool call, ask the user for input.
    // At the first iteration, this will always be true, since stopReason is null.
    if (stopReason !== "tool_use") {
      const userInput = prompt("You:");
      if (!userInput || userInput.toLowerCase() === "exit") {
        console.log("Goodbye!");
        break;
      }
      messages.push({ role: "user", content: userInput });
    }
    // ...
  }
}
```
This loop runs until the model finishes its task.
If the last step wasn’t a tool use, we ask the user for input. If it was, we assume the model wants to continue on its own using tools, so we skip the prompt and let it go.
What happens next?
We send a request to the LLM, giving it the current `messages`, the available `tools`, and system instructions (loaded from `prompt.md`):
```typescript
// ...
while (true) {
  if (stopReason !== "tool_use") {
    // Prompt for user input...
  }

  // Call the LLM.
  const instructions = await read_file({
    path: new URL("prompt.md", import.meta.url).pathname,
  });
  const response = await anthropic.messages.create({
    model,
    tools,
    messages,
    max_tokens: maxTokens,
    system: instructions,
  });
  messages.push({ role: "assistant", content: response.content });
  // ...
}
```
Now we handle the response.
There are two possible outcomes:
- It sends text – usually the model “thinking out loud” or planning its next move.
- It calls a tool – in that case, we run the tool and return the result back to the model.
```typescript
// ...
while (true) {
  if (stopReason !== "tool_use") {
    // Prompt for user input...
  }

  // Call the LLM...

  // Print the text response and process tool calls.
  const toolResults: Anthropic.ContentBlockParam[] = [];
  for (const block of response.content) {
    if (block.type === "text") {
      console.log(`Claude: ${block.text}`);
    } else if (block.type === "tool_use") {
      console.log(`Tool: ${block.name}`);
      const toolResult = await processToolUse(block);
      toolResults.push(toolResult);
    }
  }
  if (toolResults.length > 0) {
    messages.push({ role: "user", content: toolResults });
  }

  stopReason = response.stop_reason; // Usually "tool_use" or "end_turn".
}
```
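One piece the loop references without showing is `processToolUse`. A minimal sketch of how it could dispatch tool calls to the tool functions might look like this (the types below are local stand-ins for the Anthropic SDK shapes, and the handler map is illustrative; the repo's version lives alongside the tools):

```typescript
import { readFile } from "node:fs/promises";

// Local stand-ins for the Anthropic SDK types used below.
type ToolUseBlock = { type: "tool_use"; id: string; name: string; input: unknown };
type ToolResultBlockParam = {
  type: "tool_result";
  tool_use_id: string;
  content: string;
  is_error?: boolean;
};

// Map tool names to implementations (only read_file sketched here for brevity).
const toolHandlers: Record<string, (input: any) => Promise<string>> = {
  read_file: async (input: { path: string }) =>
    readFile(input.path, "utf8").catch((e: Error) => `Error reading file: ${e.message}`),
};

async function processToolUse(block: ToolUseBlock): Promise<ToolResultBlockParam> {
  const handler = toolHandlers[block.name];
  if (!handler) {
    // Telling the model the tool doesn't exist lets it recover gracefully.
    return {
      type: "tool_result",
      tool_use_id: block.id,
      content: `Unknown tool: ${block.name}`,
      is_error: true,
    };
  }
  return {
    type: "tool_result",
    tool_use_id: block.id,
    content: await handler(block.input),
  };
}
```

The key detail is that every tool call, even a failed one, produces a `tool_result` tied back to the original `tool_use_id`, so the conversation history stays consistent.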
That’s it. 🥳
Prompt Engineering
Remember this line from earlier?
```typescript
const instructions = await read_file({
  path: new URL("prompt.md", import.meta.url).pathname,
});
```
This is where we load the prompt that defines the agent’s behavior:
```typescript
const response = await anthropic.messages.create({
  // ...
  system: instructions,
});
```
Here’s what the prompt for our code agent looks like:
```
You are an AI assistant specialized in code editing. Your role is to complete coding tasks efficiently using the tools provided to you. Here are your instructions:

1. Available Tools:
You have access to various code editing tools. Additionally, you can use the `execute_bash` tool to run bash commands for actions not available as native tools.

2. Task Handling:
- Carefully read and understand the task provided by the user.
- Plan your approach to complete the task efficiently.
- Use the appropriate tools to make the necessary changes or additions to the code.
- If you need to perform actions not available as native tools, use the `execute_bash` tool.

3. Using execute_bash:
- To list files in the current directory: `execute_bash("ls")`
- To search for a specific file: `execute_bash("find . -name 'filename'")`
- To search for a specific string in files: `execute_bash("grep -r 'search_string' .")`

4. Output Format:
- Provide a clear explanation of the steps you're taking to complete the task.
- After completing the task, summarize what you've done and confirm that the task is complete.

Begin by analyzing the task and the current files. Then, proceed with the necessary steps to complete the task. Remember to use the `execute_bash` tool when needed and provide clear explanations of your actions.
```
Notice that I spent extra time explaining how to use the `execute_bash` tool, because that one tends to need the most guidance. The other tools, passed to Claude via the Anthropic API, are fairly self-explanatory.
When crafting prompts like this, I start with a short description like: “You are a coding agent. Your role is to use tools to complete coding tasks provided by the user.”
Then I feed it to the Anthropic Console’s prompt generator to get a longer, optimized version like the one above. After that, I tweak a few things manually based on what I know the agent will actually need.
Let's Try It
Now that our Code Agent is ready, let’s put it to work by building a tiny marketplace from scratch.
First, run `bun link` to make the project available globally as a CLI. Then, create a new folder for your project and start the agent:
```bash
mkdir nano-marketplace && cd nano-marketplace && nano-claude-code
```
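For `bun link` to expose a global `nano-claude-code` command, the project's package.json needs a `bin` entry pointing at the entry file. Something like this (the exact field values here are an assumption; check the repo's actual package.json):

```json
{
  "name": "nano-claude-code",
  "bin": {
    "nano-claude-code": "./index.ts"
  }
}
```

Since Bun runs TypeScript directly, the `bin` target can be the `.ts` file itself.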
Watch it go to work. After a few rounds of back-and-forth with the agent, the marketplace takes shape.
The Agent Stack: Model, Code, Prompt
Agents are designed to solve tasks—to get from point A to point B. To make that journey smooth, you need three things: a reliable car, a well-paved road, and clear traffic signs so you don’t get lost.
In the world of agentic software:
- The car 🏎️ is the LLM doing the work. A less capable model (e.g., `claude-3.5-haiku`) will perform nowhere near as well as a top-tier one (e.g., `claude-4-opus`).
- The road 🛣️ is your code—it defines every turn and lane change: how tool calls are routed, how data flows, and how errors are handled. If your code is buggy, it’s like trying to drive a Ferrari on a dirt track—nothing runs smoothly.
- The traffic signs 🚧 are the prompt—they tell the LLM where to go, which tools to use (and when), and what to avoid. With good code and a powerful model, a precise prompt is what separates “meh” agents from great ones.
Hope you enjoyed the post.
THE END.