BeetleB
8 days ago
Feedback 1: The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
As a non-Cursor user who does AI programming, there is nothing there to make me want to try it out.
Feedback 2: I feel any new agentic AI tool for programming should have a comparison against Aider[1], which for me is the tool to benchmark against. Can you give a compelling reason to use this over Aider? Don't just say "VSCode" - I'm sure there are extensions for VSCode that work with Aider.
As an example of the questions I have:
- Does it have something like Aider's repomap (or better)?
- To what granularity can I limit the context?
andrewpareles
8 days ago
Thanks for the feedback. We'll definitely add a feature list. To answer your question, yes - we support Cursor's features (quick edits, agent mode, chat, inline edits, links to files/folders, fast apply, etc.) using open source and openly available models (for example, we haven't trained our own autocomplete model, but you can bring any autocomplete or "FIM" model).
We don't have a repomap or codebase summary - right now we're relying on .voidrules and Gather/Agent mode to look around to implement large edits, and we find that works decently well, although we might add something like an auto-summary or Aider's repomap before exiting Beta.
Regarding context - you can customize the context window and the amount of token space reserved for each model. You can also use "@ to mention" to include entire files and folders, limited by the context window length. (You can also customize each model's reasoning ability, think tags to parse, tool-use format (Gemini/OpenAI/Anthropic), FIM support, etc.)
throwup238
7 days ago
An important Cursor feature that no one else seems to have implemented yet is documentation indexing. You give it a base URL and it crawls and generates embeddings for API documentation, guides, tutorials, specifications, RFCs, etc. in a very language-agnostic way. That, plus an agent tool to do fuzzy or full-text search on those same docs, would also be nice. Referring to those @docs in the context works really well to ground the LLMs and eliminate API hallucinations.
Back in 2023 one of the Cursor devs mentioned [1] that they first convert the HTML to markdown and then do n-gram deduplication to remove nav, headers, and footers. The state of the art for chunking has probably gotten a lot better since then.
[1] https://forum.cursor.com/t/how-does-docs-crawling-work/264/3
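The dedup trick is roughly this (my own sketch in Python, not Cursor's actual pipeline): count how many pages each n-gram appears on, and strip the n-grams that repeat across many pages before chunking and embedding.

```python
# Rough illustration of n-gram boilerplate removal: anything that recurs on
# several pages (nav bars, headers, footers) is treated as boilerplate.
from collections import Counter

def ngrams(tokens, n=8):
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def strip_boilerplate(pages, n=8, min_pages=3):
    """pages: list of plain-text pages (already converted from HTML/markdown)."""
    tokenized = [page.split() for page in pages]

    # Document frequency: on how many pages does each n-gram appear?
    df = Counter()
    for toks in tokenized:
        df.update(set(ngrams(toks, n)))
    boilerplate = {gram for gram, count in df.items() if count >= min_pages}

    # Mask out every token covered by a repeated n-gram.
    cleaned = []
    for toks in tokenized:
        keep = [True] * len(toks)
        for i, gram in enumerate(ngrams(toks, n)):
            if gram in boilerplate:
                keep[i:i + n] = [False] * n
        cleaned.append(" ".join(t for t, k in zip(toks, keep) if k))
    return cleaned
```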
mapmap
7 days ago
The continue.dev plugin for Visual Studio Code provides documentation indexing. You provide a base URL and a tag; the plugin then scrapes the documentation and builds a RAG index, which lets you use the documentation as context within chat. For example, you could ask "@godotengine what is a sprite?"
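For reference, a docs entry in the plugin's config.json looks roughly like this (field names are from memory, so check continue's current docs; the title becomes the @-mention tag):

```json
{
  "docs": [
    {
      "title": "godotengine",
      "startUrl": "https://docs.godotengine.org/en/stable/"
    }
  ]
}
```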
conartist6
7 days ago
So this is why everything is going behind Anubis then?
GreenWatermelon
7 days ago
Nah, Anubis combats systematic scraping of the web by data scrapers, not actual user agents.
conartist6
7 days ago
A scraper in this case is the agent of the user. That doesn't make it not a scraper, and it can and will still get trapped.
lgiordano_notte
7 days ago
Cursor's doc indexing is actually one of the few AI coding features that feels like it saves time. Embedding full doc sites, deduping nav/header junk, then letting me reference @docs inline really improves context grounding instead of leaving the model to guess at APIs.
steveharman
7 days ago
Just use the Context7 MCP? Though I'm assuming Void supports MCP.
gesman
7 days ago
Context7 is missing lots of pieces of info from the repos it indexes and is getting bloated with similar-sounding repos, which is becoming confusing for LLMs.
Aeroi
7 days ago
Can you elaborate on how Context7 handles document indexing and web crawling? If I connect to the MCP server, will it be able to crawl websites fed to it?
andrewpareles
7 days ago
Agreed - this is one of the better solutions today.
andrewpareles
7 days ago
This is a good point. We've stayed away from documentation, assuming it's more of a browser-agent task, and I agree with other commenters that this would make a good MCP integration.
I wonder if the next round of models trained on tool-use will be good at looking at documentation. That might solve the problem completely, although OSS and offline models will need another solution. We're definitely open to trying things out here, and will likely add a browser-using docs scraper before exiting Beta.
RobinL
7 days ago
I agree that on the face of it this is extremely useful. When I tried it for multiple libraries it was a complete failure, though: it failed to crawl fairly standard MkDocs and Sphinx sites. I guess it's better for the 'built in' ones that they've pre-indexed.
throwup238
7 days ago
I use it mostly to index stuff like Rust docs on docs.rs and rendered mdbooks. The RAG is hit or miss but I haven’t had trouble getting things indexed.
satvikpendem
7 days ago
Do you support @Docs?
SafeDusk
7 days ago
I've used both Cursor and Aider, but I've always wanted something simple that I have full control over, if only to understand how they work. So I made a minimal coding agent (with edit capability) that is fully functional using only seven tools: read, write, diff, browse, command, ask, and think.
I can just disable the `ask` tool, for example, to have it go fully autonomous on certain tasks.
Have a look at https://github.com/aperoc/toolkami to see if it might be useful for you.
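The core is just a small tool registry plus a loop that lets the model call into it. Here's a stripped-down sketch of the registry idea (hypothetical code, not the actual toolkami implementation):

```python
# Register a handful of tools by name; toggle autonomy by simply removing
# the `ask` tool from the set the model is allowed to call.
from typing import Callable, Dict

def read(path: str) -> str:
    """Read a file and return its contents to the model."""
    with open(path) as f:
        return f.read()

def ask(question: str) -> str:
    """Human-in-the-loop tool: pause and ask the user for guidance."""
    return input(f"[agent asks] {question}\n> ")

def think(note: str) -> str:
    """Scratchpad tool: echo the note back so it stays in context."""
    return note

TOOLS: Dict[str, Callable[..., str]] = {
    "read": read,
    "ask": ask,
    "think": think,
    # write, diff, browse, and command would be registered the same way.
}

def toolset(autonomous: bool = False) -> Dict[str, Callable[..., str]]:
    """Return the tools exposed to the model for this run."""
    tools = dict(TOOLS)
    if autonomous:
        tools.pop("ask", None)  # no questions back to the user => full autonomy
    return tools
```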
larusso
7 days ago
Will check this out. I like to have a bit more control over my stack if possible.
satvikpendem
7 days ago
> The README really needs more details. What does it do/not do? Don't assume people have used Cursor. If it is a Cursor alternative, does it support all of Cursor's features?
That's all on the website, not in the README, but yes, a bulleted list or the same info from the site would work well.
_345
8 days ago
Am I the only one who has had bad experiences with Aider? Each time I've tried it, I had to wrestle with and beg the AI to do what I wanted, almost always ending with me just taking over and doing it myself.
If nearly every time I use it to accomplish something it gets 40-85% of the way there and I have to go in and fix the remaining 15-60%, what is the point? It's as slow as hand-writing code then, if not slower, and my flow with Continue is simply better:
1. Ctrl+L a block of code.
2. Ask a question or give a task.
3. Read what it says, then apply the change myself via Ctrl+C, tweaking the one or two little things it inevitably misunderstood about my system and its requirements.
CuriouslyC
7 days ago
Aider is quite configurable; you need to look at the leaderboard and copy one of the high-performing model/config setups. Additionally, you should auto-load files such as the README and your project's coding guidelines.
Aider's killer features are its automated lint/typecheck/test-and-fix loops integrated with git checkpointing. If you're not setting up these features, you aren't getting its full value proposition.
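A minimal .aider.conf.yml along those lines might look something like this (option names are from memory of aider's docs, so double-check them against `aider --help` for your version):

```yaml
# Sketch only -- verify option names for your installed aider version.
model: sonnet                      # pick a model/edit-format pair from the leaderboard
edit-format: diff
read:                              # auto-loaded, read-only context on every run
  - README.md
  - CONVENTIONS.md
auto-lint: true
lint-cmd: "python: ruff check"     # lint-and-fix loop after each edit
auto-test: true
test-cmd: pytest -q                # test-and-fix loop
auto-commits: true                 # git checkpoint after every applied change
```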
larusso
7 days ago
Never used the tool, but it seems both Aider and Cursor are not at their strongest out of the box? I've read similar things about Cursor needing custom configuration so it picks up coding guidelines, etc. Is there some kind of agreed, documented best-practice standard, or just trial-and-error practices shared by users?
CuriouslyC
7 days ago
Aider's leaderboard is a baseline "best practice" for model/edit-format/mode selection. Beyond that, it's basically whatever you think are best practices in engineering and code style, which you should capture in documents that can serve double duty for both AI and human contributors. Given that a lot of this is highly contentious, it's really up to you to pick and choose what you prefer.
attentive
7 days ago
That depends on the models you use and your prompts.
Use gemini-2.5-pro, sonnet-3.5/3.7, or gpt-4.1.
Be as specific and detailed in your prompts as you can, and include the right context.
dingnuts
7 days ago
And what do you do if you value privacy and don't want to share everything in your project with Silicon Valley, or don't want to spend $8/hr to watch Claude do your hobby for you?
I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
If I have to write a prompt as long as Claude's system prompt in order to get reliable edits, I'm not sure I've saved any time at all.
At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
wredcoll
7 days ago
I haven't used local models; I don't have the 60+ GB of VRAM to do so.
I've tested Aider with Gemini 2.5 using prompts as basic as "write a TS file with Puppeteer to load this URL, click the button identified by x, fill in input y, loop over these URLs" and it performed remarkably well.
LLM performance is 100% dependent on the model you're using, so you can hardly generalize from a small model you run locally on a CPU.
mp5
7 days ago
Local models just aren't there yet in terms of running on your laptop without extra hardware.
We're hoping one of the big labs will distill an ~8B to ~32B parameter model that performs at SOTA on benchmarks! That would be huge for cost and would probably make it reasonable for most people to run coding agents in parallel.
troupo
7 days ago
> or use the "right" prompt. Give some examples.
There's no such thing as a "right prompt". It's all snake oil. https://dmitriid.com/prompting-llms-is-not-engineering
wkat4242
7 days ago
This is exactly the issue I have with Copilot in Office. It doesn't learn from my style, so I have to be very specific about how I want things. At that point it's quicker to just write it myself.
Perhaps when it can dynamically learn on the go this will be better. But right now it's not terribly useful.
mp5
7 days ago
I really wonder why dynamic learning hasn't been explored more. It would be a huge moat for the labs (everyone would have to host and dynamically train their own model with a major lab). Seems like it would make the AI way smarter too.
BeetleB
7 days ago
> At least you seem to be admitting that aider is useless with local models. That's certainly my experience.
I don't see it as an admission. I'd wager 99% of Aider users simply haven't tried local models.
Although I would expect they would be much worse than Sonnet, etc.
> I'm getting really tired of AI advocates telling me that AI is as good as the hype if I just pay more (say, fellow HNer, your paycheck doesn't come from those models, does it?), or use the "right" prompt. Give some examples.
Examples? Aider is a great tool and much (probably most) of it is written by AI.
NewsaHackO
7 days ago
Is this post just you yelling at the wind? What does this have to do with the post you replied to?