Dear Pabbly folks,
Realistically, you'd need to increase your OpenAI timeout to 300 seconds. Their models routinely get overloaded and respond slowly; generating a long text often takes up to 5 minutes. That doesn't mean there's an error on their side; they are just slow. OpenAI modules will likely be some of the most requested within Pabbly. I know the timeout limit is much lower for most other APIs, but this is a new reality. Having OpenAI error out because you're timing the module out after 25 seconds will cause A LOT of user frustration. Hence, for OpenAI, I'd jack up the timeout to 300 seconds (at least).
Also, we desperately need auto-retry conditions. This could be useful for other modules, but for OpenAI it's essential. Their endpoints error out routinely when their models are overloaded, and having to stop and re-enable the entire scenario whenever a module errors out is a huge waste of operations. Hence, to build robust automation scenarios with OpenAI, we need to be able to set something like "auto-retry three times in case of errors".
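To illustrate what I mean, here is a minimal sketch of the behavior in Python. Everything here is hypothetical (the `with_retries` wrapper and `flaky_call` are made up for illustration, not Pabbly's or OpenAI's actual API); it just shows "try up to three times, backing off between attempts, before giving up":

```python
import time

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Run `call`, retrying up to max_attempts times on any exception,
    with exponential backoff between attempts. Hypothetical sketch of
    the requested auto-retry condition, not a real Pabbly feature."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error as today
            time.sleep(base_delay * 2 ** (attempt - 1))

# Example: a call that fails twice (as an overloaded endpoint might),
# then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("model overloaded")
    return "completion"

result = with_retries(flaky_call, base_delay=0.01)
```

With a rule like this in place, the scenario would recover from a transient OpenAI error on its own instead of halting until someone re-enables it.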
Eugene