The ergonomics of this are good in terms of integration mechanics. I wouldn't worry about performance as long as we are in the tens of milliseconds range on reflection/invoke.
I think the biggest concern is that the number of types and methods is going to be too vast for most practical projects. LLM agents fall apart beyond 10 tools or so. Think about the odds of picking the right method out of 10,000+, even with a strong bias toward the correct path. A lot of the pain of AI integration is carefully shaping the raw environment so that we don't overwhelm the token budget of the model (or our personal budgets).
I would consider exposing a set of ~3 generic tools like:
SearchTypes
GetTypeInfo
ExecuteScript
This constrains your baseline token budget to a very reasonable starting point each time.
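A minimal sketch of what that three-tool surface could look like over plain reflection. All names here (`AgentTools`, `SearchTypes`, `GetTypeInfo`) are illustrative, not an existing API; `ExecuteScript` is left as a comment since it would pull in a scripting dependency such as Microsoft.CodeAnalysis.CSharp.Scripting:

```csharp
using System;
using System.Linq;
using System.Reflection;

static class AgentTools
{
    // SearchTypes: cheap, capped name search so the model never sees
    // the full type catalog in one response.
    public static string[] SearchTypes(string query, int limit = 20) =>
        AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a =>
            {
                try { return a.GetTypes(); }
                catch (ReflectionTypeLoadException) { return Type.EmptyTypes; }
            })
            .Where(t => t.FullName != null &&
                        t.FullName.Contains(query, StringComparison.OrdinalIgnoreCase))
            .Take(limit)
            .Select(t => t.FullName!)
            .ToArray();

    // GetTypeInfo: expand a single type on demand -- signatures only,
    // so token cost scales with what the agent actually drills into.
    public static string[] GetTypeInfo(string fullName)
    {
        var type = AppDomain.CurrentDomain.GetAssemblies()
            .Select(a => a.GetType(fullName))
            .FirstOrDefault(t => t != null);
        if (type == null) return Array.Empty<string>();
        return type.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.Static)
            .Select(m => m.ToString()!)
            .ToArray();
    }

    // ExecuteScript would be the third tool: one scripting entry point
    // (e.g. CSharpScript.EvaluateAsync) instead of one tool per method.
}
```

The point of the shape: the model pays tokens only for the slice of the surface it navigates to, rather than for a 10,000-entry tool manifest up front.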
I would also consider schemes like attributes that explicitly opt-in methods and POCOs for agent inspection/use.
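The opt-in scheme could be as simple as a marker attribute; the attribute and type names below are hypothetical, just to show the shape:

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical marker: only annotated members are ever described to the agent.
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public sealed class AgentVisibleAttribute : Attribute
{
    public string Description { get; set; } = "";
}

public class OrderService
{
    [AgentVisible(Description = "Look up an order by id.")]
    public string GetOrder(int id) => $"order-{id}";

    // Not annotated -- never surfaced to the agent, even though it's public.
    public void DeleteAllOrders() { }
}

static class AgentSurface
{
    // Discovery walks only the opted-in members, so the exposed surface
    // (and its token cost) is bounded by what the author deliberately marked.
    public static string[] ListExposed(Type type) =>
        type.GetMethods(BindingFlags.Public | BindingFlags.Instance)
            .Where(m => m.GetCustomAttribute<AgentVisibleAttribute>() != null)
            .Select(m => m.Name)
            .ToArray();
}
```

This also doubles as a safety boundary: dangerous methods stay invisible by default instead of relying on the model to avoid them.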