I think we need a training-wheel mode, especially when loops are involved: if the agent consumes more than X credits in a one-minute window (let the agenticist declare the threshold), the agent is shut down and some best practices are printed, e.g. keep debug loops to 2-3 parallel operations max, turn on having the agent 'prompt' before each step, things like that. Something to keep people who are still learning from going overboard. And in the rare case there is a Lindy bug, it would also help Lindy.ai avoid burning a lot of tokens with Anthropic and then having to credit the user a bunch of tokens to keep the customer from giving up.
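A minimal sketch of what that guard could look like, as a sliding-window spend tracker. Everything here is hypothetical (the `CreditGuard` name, the threshold, and the printed tips are illustrative, not a real Lindy API):

```python
import time
from collections import deque


class CreditGuard:
    """Halt the agent if it spends more than max_credits within
    window_seconds. Hypothetical sketch, not a real Lindy.ai API."""

    def __init__(self, max_credits, window_seconds=60.0):
        self.max_credits = max_credits
        self.window_seconds = window_seconds
        self.spend = deque()  # (timestamp, credits) pairs

    def record(self, credits, now=None):
        """Record a spend; return False if the budget is blown
        and the agent should be shut down."""
        now = time.monotonic() if now is None else now
        self.spend.append((now, credits))
        # Drop spends that have fallen out of the window.
        while self.spend and now - self.spend[0][0] > self.window_seconds:
            self.spend.popleft()
        total = sum(c for _, c in self.spend)
        if total > self.max_credits:
            print(f"Agent halted: {total:.1f} credits spent in "
                  f"{self.window_seconds:.0f}s (limit {self.max_credits}).")
            print("Tips: keep debug loops to 2-3 parallel operations; "
                  "enable step-by-step prompting while learning.")
            return False
        return True
```

Usage would be something like calling `guard.record(cost)` after every agent step and stopping the run on a `False` return; old spends expire out of the window, so a paused user can resume later without the guard re-tripping.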