Claude Code Meetup 2 Oslo

Oslo, Norway

LSP Integration

What it is: Language Server Protocol (LSP) gives Claude Code IDE-level code intelligence - it can jump to definitions, find references, see errors, and understand your code semantically instead of just searching text.

Why it matters: 900x faster than text-based search (50ms vs 45 seconds). Claude now "sees" your code like an IDE does.

Supported languages: Python, TypeScript, Go, Rust, Java, C/C++, C#, PHP, Kotlin, Ruby, HTML/CSS

Try it:

# Enable LSP (if not auto-enabled)
export ENABLE_LSP_TOOL=1

# Then ask Claude things like:
# "Go to the definition of handleAuth"
# "Find all references to UserService"
# "What errors are in this file?"

Resources: Medium guide | Setup guide


Teleport & Cloud

What it is: Move your Claude session between local terminal and cloud (claude.ai/code). Work continues even when your laptop is closed.

Two commands:

Command            What it does
& your prompt      Sends task to cloud, runs in background
/teleport or /tp   Pull a cloud session back to local

Try it:

# Send a task to run in the cloud
& "Refactor the authentication module and add tests"

# Later, pull it back locally
/teleport

# Or from command line
claude --teleport

Requirements: Clean git state, same repo, Pro/Max plan or API tokens

Pro tip: Share Session ID with teammates for async pair programming!

Resources: Teleportation.dev | Claude Code on web docs


Tool Search (MCP Context Fix)

What it is: MCP tools used to load ALL their definitions upfront, eating your context. Tool Search loads them on-demand instead.

The problem: 50+ MCP tools could consume 66K tokens before you even typed anything (1/3 of your context!)

The solution: Tool Search reduced this from ~77K to ~8.7K tokens - roughly an 89% reduction.

Try it:

# Auto-enables when MCP tools exceed 10% of context
# Customize threshold:
export ENABLE_TOOL_SEARCH=auto:5

# Or disable if needed:
export ENABLE_TOOL_SEARCH=false

Requirements: Sonnet 4+ or Opus 4+ (Haiku doesn't support it)

Resources: VentureBeat article | Tool search docs


Diff View

What it is: See code changes in a proper side-by-side diff viewer in your IDE instead of raw text in the terminal.

Try it:

# In Claude Code, configure diff tool
/config
# Set diff tool to "auto"

# Or connect to VS Code from external terminal
/ide

Features:

  • Side-by-side old vs new code
  • Accept/reject individual hunks
  • Checkpoint created before changes (easy rollback)

Resources: IDE diff setup guide | VS Code docs


Tab + "Yes/but" Workflow

What it is: Quick keyboard workflow for reviewing and modifying Claude's suggestions.

How it works:

  1. Press Tab to toggle thinking on/off
  2. Press Shift+Tab to cycle modes: Edit → Auto-accept → Plan
  3. Type "Yes" to accept, or "Yes, but..." to accept with modifications

Modes:

Mode                         What it does
Edit (default)               Asks permission before changes
Auto-accept (1x Shift+Tab)   Writes files without asking
Plan (2x Shift+Tab)          Creates plans without code changes

Pro tip: Auto-accept + git commits as checkpoints is a comfortable default workflow.
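
For example, a quick checkpoint before switching to auto-accept:

# Commit a checkpoint you can roll back to
git add -A && git commit -m "checkpoint: before Claude auto-accept run"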

Resources: Interactive mode docs | Cheatsheet


Fleet Operations

What it is: Run multiple Claude instances in parallel like commanding a fleet. Boris Cherny (Claude Code creator) runs 10-20 agents simultaneously.

The workflow:

  1. Number your terminal tabs 1-5
  2. Use system notifications to know when Claude needs input (see the hook sketch after this list)
  3. Run 5-10 additional sessions on claude.ai/code
  4. Act as "fleet commander" - feels more like StarCraft than coding!
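
One way to wire up the notifications from step 2 is a Notification hook in .claude/settings.json - a sketch using Linux's notify-send (swap in osascript or terminal-notifier on macOS):

{
  "hooks": {
    "Notification": [{
      "hooks": [{
        "type": "command",
        "command": "notify-send 'Claude Code' 'A session needs your input'"
      }]
    }]
  }
}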

Try it:

# Terminal 1
claude "Build the user authentication"

# Terminal 2
claude "Write tests for the API"

# Terminal 3
claude "Update documentation"

# Use & prefix for cloud sessions
& "Refactor database queries"
& "Add error handling to services"

Tools: claude-flow | wshobson/agents

Resources: Parallel agents guide | VentureBeat article


Ask Claude to Write Skills

What it is: Instead of writing slash commands manually, just ask Claude to create them for you.

Try it:

# Ask Claude to create a skill
"Create a /review skill that checks code for security issues and suggests improvements"

# Claude will create:
# .claude/skills/review/SKILL.md

Skill structure:

---
name: review
description: Reviews code for security issues
---
# Instructions for Claude
Review the code for:
1. Security vulnerabilities
2. Performance issues
3. Best practices

Locations:

  • Global: ~/.claude/commands/ or ~/.claude/skills/
  • Project: .claude/commands/ or .claude/skills/

Adapt existing skills: Don't start from scratch - take official/community skills and customize them!

# Example: Customize the frontend-design skill

## Original skill from Anthropic
# github.com/anthropics/claude-code/tree/main/plugins/frontend-design

## Your customized version (.claude/skills/frontend/SKILL.md)
---
name: frontend
description: Build frontend with our design system
---

Use the base frontend-design approach, but:
- Always use our Tailwind config from tailwind.config.js
- Follow our component patterns in src/components/
- Use our color palette: primary (#3B82F6), secondary (#10B981)
- Import from our UI library: @company/ui-kit
- Match the style of existing components

Resources: Skills docs | Skill Factory


Vertical Slice Workflow

What it is: A multi-agent slash command that chains specialized agents together for end-to-end feature development. Each phase uses a different "persona" agent.

The pipeline:

[rag-architect] → [connector-dev] → [test-engineer] → [observability]

Example command (.claude/commands/vertical-slice-workflow.md):

---
name: vertical-slice-workflow
description: End-to-end feature development using rag-architect,
  connector-dev, test-engineer, and observability agents
---

# Vertical Slice Workflow

## Phase 1: Design (rag-architect)
"Design the IndexDocument feature..."

## Phase 2: Implement (connector-dev)
"Build the connector for the designed feature..."

## Phase 3: Test (test-engineer)
"Create integration tests..."

## Phase 4: Instrument (observability-eng)
"Add OpenTelemetry..."

Why commands over ad-hoc prompting?

  • Consistency across team & sessions
  • Commands are kickstarted explicitly (/command)
  • Reusable and version-controlled in git

Try it:

# Invoke the workflow
/vertical-slice-workflow "Build user notifications feature"


/standup Command

What it is: A slash command that analyzes your recent git activity and tells you what to say in your daily standup.

Example command (.claude/commands/standup.md):

---
name: standup
description: Generate daily standup update from recent git activity
---

# Standup Generator

Look at my git activity from the last 24 hours and generate a standup update:

1. Run `git log --author=$(git config user.email) --since="24 hours ago" --oneline`
2. Run `git diff --stat HEAD~5` to see recent changes
3. Check any open PRs with `gh pr list --author @me`

Format the output as:
- **Yesterday:** What I completed
- **Today:** What I'm planning to work on
- **Blockers:** Any issues or dependencies

Keep it concise (3-4 bullet points max).

Try it:

# Before your standup meeting
/standup

Pro tip: Add to your morning routine - run it first thing and paste into Slack!


Loops in CLAUDE.md

What it is: Add iterative prompts to your CLAUDE.md that make Claude keep improving until a goal is met.

Example CLAUDE.md snippet:

## Development Loop
When working on features:
1. Implement the feature
2. Run tests
3. If tests fail, fix and repeat from step 2
4. If tests pass, check coverage
5. If coverage < 80%, add tests and repeat
6. Update TASKS.md with progress

The "Ralph Wiggum" technique:

# Simple continuous loop (-p runs a single non-interactive turn, so the loop repeats)
while true; do
  claude -p --dangerously-skip-permissions \
    "Continue working on TASKS.md items. Update progress in TASKS.md"
done

Pro tip: Anthropic says Claude's output improves significantly with 2-3 iterations.

Resources: Running Claude in a loop | Ralph Wiggum technique | Anthropic best practices


Hooks to Never Stop

What it is: Use Stop hooks to prevent Claude from exiting until your conditions are met.

How it works: Stop hooks can return {"decision": "block"} or exit code 2 to force Claude to continue.

Example hook (.claude/settings.json):

{
  "hooks": {
    "Stop": [{
      "hooks": [{
        "type": "command",
        "command": "bash -c 'if grep -rq TODO src/; then echo \"Still have TODOs\" >&2; exit 2; fi'",
        "timeout": 30
      }]
    }]
  }
}

Exit codes:

Code   Behavior
0      Allow stop
2      Block stop, show message to Claude
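
The {"decision": "block"} form works the same way, except the hook prints JSON on stdout instead of exiting with code 2. A minimal sketch of such a hook script (check-todos.sh is a made-up name you would point the Stop hook's command at):

#!/bin/bash
# check-todos.sh - block stopping while TODOs remain; "reason" is shown to Claude
if grep -rq TODO src/; then
  echo '{"decision": "block", "reason": "Still have TODOs in src/ - keep working"}'
fi
exit 0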

Warning: Can cause infinite loops if not controlled!

Plugin: Ralph plugin - automatically re-invokes prompts when Claude tries to exit.

Resources: Hooks docs | Hooks mastery


Structured Outputs & BAML

What it is: BAML (Boundary AI Markup Language) is a domain-specific language for getting reliable structured data (JSON, typed objects) from LLMs.

Why use it: Works even without native tool-calling APIs, fixes LLM output errors in milliseconds (not seconds), works on Day 1 of new model releases.

Install:

pip install baml-py
# or
npm install @boundaryml/baml

Example (.baml file):

function ExtractUser(text: string) -> User {
  client "anthropic/claude-sonnet-4-20250514"
  prompt #"
    Extract user information from: {{text}}

    Return a User object.
  "#
}

class User {
  name string
  email string
  age int?
}
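
Before the usage below works, generate the typed client from your .baml files (assuming the standard BAML CLI that ships with baml-py / @boundaryml/baml):

# Python projects
baml-cli generate
# TypeScript projects
npx baml-cli generate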

Usage:

from baml_client import b

result = b.ExtractUser("John Doe, john@example.com, 30 years old")
print(result.name)  # "John Doe"
print(result.email) # "john@example.com"

Benefits:

  • Type-safe outputs across Python/TS/Ruby/Java/C#/Rust/Go
  • Schema-Aligned Parsing (SAP) beats GPT-4o's native structured outputs
  • Millisecond error correction instead of seconds of re-prompting

Resources: BAML docs | GitHub | BAML vs Instructor comparison


Common Mistakes

Mistake                                Do Instead
CLAUDE.md as docs dump                 Use .claude/rules/ and extract docs to other files
Skipping Plan Mode for complex tasks   Always plan before complex implementation
Vague descriptions                     Add trigger phrases
Omitting tools                         Always whitelist tools
No verification                        Run tests after implementing
Not exploring with subagents early     Use subagents for research

Team practices:

  • Keep CLAUDE.md in git - everyone contributes
  • Record mistakes so Claude won't repeat them
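
Example - recording mistakes in CLAUDE.md (entries are illustrative):

## Lessons learned
- Don't edit generated files under src/gen/ - regenerate them instead
- CI runs on Node 20 - don't use features that need Node 22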

Example - organizing rules instead of dumping in CLAUDE.md:

.claude/
├── rules/
│   ├── testing.md      # "Always run pytest after changes"
│   ├── style.md        # "Use single quotes for strings"
│   └── security.md     # "Never commit .env files"
└── docs/
    ├── api-reference.md
    └── architecture.md

Example - trigger phrases in descriptions:

---
name: deploy
description: Deploy to production. Triggers on "ship it", "deploy", "push to prod"
---

Agent Tool Selection

What it is: When creating new agents/subagents, don't give them access to all tools - only the ones they actually need.

Why it matters:

  • Reduces context overhead (each tool definition costs tokens)
  • Prevents agents from doing unintended actions
  • Faster execution with focused toolset
  • Better security boundaries

Example - research agent only needs read tools:

---
name: research-agent
description: Research codebase patterns
tools: Read, Grep, Glob, WebSearch
# NOT: Write, Edit, Bash, etc.
---

Example - deploy agent needs specific tools:

---
name: deploy-agent
description: Deploy to production
tools: Bash, Read
# NOT: Edit, Write (shouldn't modify code during deploy)
---

Rule of thumb: Start minimal, add tools only when the agent actually needs them.


Vercel AI Gateway Integration

What it is: Route Claude Code requests through Vercel AI Gateway for centralized usage tracking, observability, and provider failover.

Why use it:

  • Track usage and spend in one place
  • View traces in Vercel observability
  • Failover between providers
  • Centralize controls for models

Setup:

# First logout
claude logout

# Set environment variables
export ANTHROPIC_BASE_URL="https://ai-gateway.vercel.sh"
export ANTHROPIC_AUTH_TOKEN="your-vercel-token"
export ANTHROPIC_API_KEY=""  # Must be empty string

# Now use Claude Code normally
claude "Your prompt here"

Bonus - Vercel Sandbox with Agent SDK: Run Claude's Agent SDK in Vercel Sandbox for long-running processes that execute commands, manage files, and maintain conversational state.

Resources: Vercel AI Gateway docs | Changelog | Sandbox guide


Meta-Prompting: Ask Claude to Improve Itself

One tip to rule them all: Ask Claude to /init, learn, evolve, self-check, and write its own slash commands - it can help improve your entire workflow, not just your code.
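
Try it (prompt wording is just a suggestion):

# Ask Claude to bootstrap and then improve its own setup
"Run /init, then review the generated CLAUDE.md and suggest what's missing"
"Look back at this session: what mistakes did you make, and what rule should we add to CLAUDE.md so they don't happen again?"
"Turn the release-notes workflow we just did into a /release-notes slash command"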