bandelero
Member
Hey Pabbly folks,
I know there's an error handler on the way, which will hopefully help when OpenAI errors out due to model overload.
But very often OpenAI works fine and is simply slow. And if you need to produce a large piece of text (one that cannot be chunked into separate requests), the 40-second limit is not enough.
I know OpenAI currently doesn't offer a way to handle requests asynchronously (e.g. Pabbly makes a call to OpenAI, gets back an execution id, and OpenAI pings a webhook when the job is done). And they may never get to it, given the popularity of their services.
Yet, we need to be able to use OpenAI with Pabbly. Hence, I'm asking the Pabbly team to revisit the timeout for the OpenAI modules and make it match what Make offers, i.e. 300 seconds.
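For illustration, here's a minimal sketch of the async flow I mean, assuming a hypothetical `/v1/completions-async` endpoint and a caller-supplied `webhook_url`. Neither exists in OpenAI's actual API today; this is only meant to show the execution-id + webhook-callback pattern:

```python
"""Sketch of the async pattern described above. The endpoint name,
the webhook_url parameter, and the worker are all hypothetical."""
import threading
import uuid

import requests                              # pip install requests
from flask import Flask, jsonify, request   # pip install flask

app = Flask(__name__)


def run_job(execution_id: str, prompt: str, webhook_url: str) -> None:
    # Stand-in for the slow model call; a real worker would invoke the model here.
    result = f"(generated text for: {prompt[:40]}...)"
    # When generation finishes, ping the caller's webhook with the result.
    requests.post(
        webhook_url,
        json={"execution_id": execution_id, "result": result},
        timeout=10,
    )


@app.post("/v1/completions-async")  # hypothetical endpoint
def completions_async():
    body = request.get_json()
    execution_id = str(uuid.uuid4())
    # Respond immediately with an execution id; do the slow work in the background.
    threading.Thread(
        target=run_job,
        args=(execution_id, body["prompt"], body["webhook_url"]),
        daemon=True,
    ).start()
    return jsonify({"execution_id": execution_id}), 202


if __name__ == "__main__":
    app.run(port=8080)
```

The point of the pattern: the caller gets the execution id back immediately and stops waiting, the slow generation completes out of band, and the result arrives at the webhook. That would sidestep any hard per-request timeout entirely, but since OpenAI doesn't offer it, a longer timeout on Pabbly's side is the practical fix.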