• TurdBurgler@sh.itjust.works

    While it’s possible to see gains in complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens and frustration.

    I see a lot of people saying, “I tried it and it didn’t work,” but have they read the guides or just jumped right in?

    For example, if you haven’t read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.
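    To make that concrete, here’s a rough sketch of defining a custom slash command. In Claude Code, custom slash commands are just markdown files under `.claude/commands/`; the `review` command name and prompt text below are made up for illustration, and exact behavior may vary by version, so check the docs:

    ```shell
    # Custom slash commands live as markdown files under .claude/commands/.
    # The filename (minus .md) becomes the command name.
    mkdir -p .claude/commands
    cat > .claude/commands/review.md <<'EOF'
    Review the staged diff for bugs and style issues, then summarize findings.
    EOF
    # Inside a claude session this now shows up as /review.
    # MCP servers are registered separately, roughly:
    #   claude mcp add <name> -- <command that launches the server>
    ```

    Point being: a few minutes of setup like this replaces retyping the same boilerplate prompt every session.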

    Your CLAUDE.md might be trash, and maybe you’re misusing @file references, blowing tokens or biasing your context the wrong way.
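    For reference, a decent CLAUDE.md is short and factual, not a novel. This is a hypothetical sketch (the project name, commands, and paths are invented), just to show the shape:

    ```markdown
    # CLAUDE.md

    ## Project
    Payments API, TypeScript, pnpm monorepo.

    ## Commands
    - Build: `pnpm build`
    - Test: `pnpm test` (run before every commit)

    ## Conventions
    - No default exports; use named exports.
    - New endpoints need a test in tests/api/.
    ```

    Short, high-signal entries like this get read on every turn; a bloated one just burns context.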

    LLM context windows can only scale so far before you hit diminishing returns, especially once the model or tooling starts compacting them.

    1. Plan first, using planning modes to help you, and decompose the plan into small steps
    2. Have the model keep track of important context externally (like in markdown files with checkboxes) so the model can recover when the context gets fucked up
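    Step 2 can be as simple as a scratchpad file the model updates as it works. A hypothetical example (filename and tasks invented):

    ```markdown
    <!-- tasks.md: external scratchpad the model keeps current -->
    ## Plan: add rate limiting to the API
    - [x] Survey existing middleware in src/middleware/
    - [x] Write failing tests for the limiter
    - [ ] Implement token-bucket limiter
    - [ ] Wire limiter into the router and update docs
    ```

    After a compaction or a fresh session, telling the model to re-read this file gets it back on track without re-explaining everything.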

    https://www.promptingguide.ai/

    https://www.anthropic.com/engineering/claude-code-best-practices

    There are community guides that take this even further, but these are some starting references I found very valuable.