bob1029
4 days ago
The ergonomics of this are good in terms of integration mechanics. I wouldn't worry about performance as long as we are in the tens of milliseconds range on reflection/invoke.
I think the biggest concern is that the # of types & methods is going to be too vast for most practical projects. LLM agents fall apart beyond 10 tools or so. Think about the odds of picking the right method out of 10000+, even with strong bias toward the correct path. A lot of the AI integration pain is carefully conforming to the raw nature of the environment so that we don't overwhelm the token budget of the model (or our personal budgets).
I would consider exposing a set of ~3 generic tools like:
SearchTypes
GetTypeInfo
ExecuteScript
This constrains your baseline token budget to a very reasonable starting point each time. I would also consider schemes like attributes that explicitly opt methods and POCOs in for agent inspection/use.
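Roughly what I mean, sketched in Python for brevity (the same shape works with .NET reflection and a custom attribute instead of a decorator). All names here are made up, and ExecuteScript is reduced to a single method invocation to keep the sketch short:

```python
import inspect

AGENT_VISIBLE = "_agent_visible"

def agent_tool(cls):
    """Opt-in marker, analogous to an [AgentVisible] attribute in C#."""
    setattr(cls, AGENT_VISIBLE, True)
    return cls

@agent_tool
class OrderService:
    """Hypothetical domain type the agent may use."""
    def get_order(self, order_id: int) -> dict:
        return {"id": order_id, "status": "shipped"}

class HiddenService:
    """Not opted in; never surfaced to the agent."""
    def drop_all_tables(self): ...

REGISTRY = [OrderService, HiddenService]

def search_types(query: str) -> list[str]:
    """Tool 1: search over opted-in type names only."""
    return [c.__name__ for c in REGISTRY
            if getattr(c, AGENT_VISIBLE, False)
            and query.lower() in c.__name__.lower()]

def get_type_info(type_name: str) -> dict:
    """Tool 2: return public method signatures for one opted-in type."""
    cls = next(c for c in REGISTRY
               if c.__name__ == type_name and getattr(c, AGENT_VISIBLE, False))
    return {name: str(inspect.signature(fn))
            for name, fn in inspect.getmembers(cls, inspect.isfunction)
            if not name.startswith("_")}

def execute_method(type_name: str, method: str, **kwargs):
    """Tool 3: instantiate and invoke by name (reflection-style dispatch)."""
    cls = next(c for c in REGISTRY
               if c.__name__ == type_name and getattr(c, AGENT_VISIBLE, False))
    return getattr(cls(), method)(**kwargs)
```

The point is that only these three tool definitions ever enter the prompt; the 10000+ types stay behind the search/info calls, and the opt-in marker keeps dangerous members out entirely.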
ddddazed
3 days ago
Hi bob, thanks for your reply. Yes, reducing token usage and limiting the number of variables to get more deterministic behavior in enterprise projects is a fundamental concern.
In my work we have multi-project solutions, so I understand what you mean, and I completely agree. I wasn't proposing this solution to replace the architectural design step; at this point that can be done directly in the JSON we send to the model. With reflection you don't need wrappers. Say we also limit the exposed functions: with this architecture I don't have to write a wrapper for each of the three functions I choose. I simply create a JSON file that describes those three basic ones.
At this point, reflection will do its job.
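Concretely, something like this (Python for brevity; the same pattern maps to MethodInfo.Invoke in .NET, and every name in the manifest is a hypothetical example):

```python
import json

# Hypothetical JSON manifest: the only thing sent to the model,
# and no hand-written wrapper per function.
TOOLS_JSON = """
[
  {"type": "OrderService", "method": "get_order",
   "description": "Fetch an order by id",
   "parameters": {"order_id": "integer"}},
  {"type": "OrderService", "method": "list_orders",
   "description": "List recent orders",
   "parameters": {}}
]
"""

class OrderService:
    """Hypothetical domain type resolved at dispatch time."""
    def get_order(self, order_id: int) -> dict:
        return {"id": order_id, "status": "shipped"}
    def list_orders(self) -> list:
        return [{"id": 1, "status": "shipped"}]

TYPES = {"OrderService": OrderService}

def tool_schemas() -> list[dict]:
    """What goes into the prompt: just the parsed JSON manifest."""
    return json.loads(TOOLS_JSON)

def dispatch(call: dict):
    """When the model picks a tool, reflection-style lookup invokes it."""
    spec = next(t for t in json.loads(TOOLS_JSON)
                if t["method"] == call["method"])
    instance = TYPES[spec["type"]]()
    return getattr(instance, spec["method"])(**call["arguments"])
```

Adding or removing a function is then just an edit to the JSON file; the dispatch code never changes, which is what I mean by reflection doing its job.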
Can you clarify this further?