I wonder if there would be a way to "critique" a lindy and have it fix itself. Often enough I end up fine-tuning the lindy specs, but when I fix one thing another pops up, like playing whack-a-mole. I have a lindy that works great, but a small issue has popped up. It would be nice to give it instructions for fixing that one issue specifically without having to modify the base structure of the lindy. For example, I told my lindy specifically not to reply to emails where people ask clarifying questions (it's been too unpredictable to give it that task, so I just have it sending emails). It's been doing great, but then out of nowhere it sent a reply, trying to be helpful I guess, but in violation of its rules. It would be nice to be able to teach the lindy to fix a small issue like that. I'm constantly dealing with bugs/issues with Lindy, and if it could fix itself from my instructions I would much prefer that to continually tweaking the rules. Marvin A.
I will try that, though Memories seems like an odd way to critique output. I'd rather have an easy option, maybe even while viewing a task, to click on something it did and say "don't do that" or "when you do this, do it this way." Fine-tuning, I guess. Memories seems like an odd place/UX for fine-tuning.
Not really fine-tuning; it's more that if you know you don't want something to happen across all agent steps / thinking steps, this is one centralized place to put that information
Sure, that makes sense. But I guess my concerns/feedback are more about fine-tuning agents
hmmm have you thought about using a knowledge base as a learning log?
configure a KB as a Google Doc in the cloud and then have the agent write updates to it conditionally
That could maybe work... it still seems like an odd way of doing it haha. I want something easier 🙂 like "feedback/fine-tuning" while looking at a specific task
then it can take that and reconfigure itself however (using a KB, adding a memory, whatever) to make sure it learns from the feedback
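A rough sketch of what that learning log could look like (the file name, entry format, and helper names here are all hypothetical; in practice the KB would be the Google Doc mentioned above, and the agent would re-read it as context on each run):

```python
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical stand-in for the KB document the agent reads and appends to.
LOG_PATH = Path("learning_log.txt")

def record_feedback(trigger: str, correction: str) -> None:
    """Append one timestamped feedback entry to the learning log."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    entry = f"[{stamp}] WHEN: {trigger}\n  DO: {correction}\n"
    with LOG_PATH.open("a") as log:
        log.write(entry)

def load_rules() -> str:
    """Read the accumulated corrections back in, e.g. to prepend to the agent's context."""
    return LOG_PATH.read_text() if LOG_PATH.exists() else ""

# Example entry matching the clarifying-questions issue above.
record_feedback(
    "someone replies asking a clarifying question",
    "do not reply; leave the thread for a human",
)
```

The point of the append-only format is that each correction stays tied to the specific situation where it applied, instead of being folded back into ever-longer general instructions.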
yup!
You are right overall though, Parker, there's def a need for a more intuitive / Lindy-native method of agent fine-tuning
especially when there are specific things to point to, like a certain action in a task
right now it's just having to tweak the general instructions over and over, and when I fix something, another thing freaks out (not always, but often)
If it helps, this is the specific task/agent I am referring to where it deviated from instructions. It might be how I worded things, or I forgot to include a detail, but it would be nice to give feedback on the specific issue itself, when and where it happened, as opposed to trying to figure out how to reword the instructions and guess where things might not have been clear.
