
OpenAI - Response not received in 25 seconds.

smugo

Member
Hey @smugo

We have checked the issue with the team and got a response that they will reconsider the 25-second window for fetching the response from OpenAI and will try to implement a larger time frame. As of now, there is no ETA for this. We will keep you updated on the status of this conversation.
Thanks for your reply.
An ETA would be really great. And I'm talking about hours or 1-2 days here, since the whole process is really useless otherwise. You have many tutorials online about auto-generating content and publishing it with OpenAI. How did you guys do it then?
 

Neeraj

Administrator
Staff member
Thanks for your reply.
An ETA would be really great. And I'm talking about hours or 1-2 days here, since the whole process is really useless otherwise. You have many tutorials online about auto-generating content and publishing it with OpenAI. How did you guys do it then?

What is the prompt that you are using to generate the output?

In most cases the 25-second limit works perfectly fine unless you are generating a very complex response.

I would have to say that currently we won't be able to wait more than 25 seconds for the response to arrive from an external application.

We have also written a guide on how to better handle the wait time for external applications, which you can recommend to the OpenAI team.



We also talked to the tech team internally today; for now, there is no plan to increase the 25-second limit.
 

smugo

Member
What is the prompt that you are using to generate the output?

In most cases the 25-second limit works perfectly fine unless you are generating a very complex response.

I would have to say that currently we won't be able to wait more than 25 seconds for the response to arrive from an external application.

We have also written a guide on how to better handle the wait time for external applications, which you can recommend to the OpenAI team.


We also talked to the tech team internally today; for now, there is no plan to increase the 25-second limit.

I don't think the prompt is too complex, but it is certainly more than just a keyword, because a bare keyword returns pretty useless random text.

It's not true that it works just fine in most cases, as you can tell from the various reports here in the forum. Also, just imagine a prompt like this:

"Create 5 social media posts about <topic>. Use emotional adjectives, don't make any health promises. Add 3 trending hashtags to the end."
It will end up in 25-second timeouts. It's really not much text to generate, maybe 500 tokens, and this would be one of the very easy prompts...

You should really consider increasing the timeout to align with Zapier and other competitors. Otherwise the whole OpenAI integration is not useful in Pabbly. Also, why wouldn't you? What's the big deal for Pabbly in increasing it?
 

Neeraj

Administrator
Staff member
I don't think the prompt is too complex, but it is certainly more than just a keyword, because a bare keyword returns pretty useless random text.

It's not true that it works just fine in most cases, as you can tell from the various reports here in the forum. Also, just imagine a prompt like this:

"Create 5 social media posts about <topic>. Use emotional adjectives, don't make any health promises. Add 3 trending hashtags to the end."
It will end up in 25-second timeouts. It's really not much text to generate, maybe 500 tokens, and this would be one of the very easy prompts...

You should really consider increasing the timeout to align with Zapier and other competitors. Otherwise the whole OpenAI integration is not useful in Pabbly. Also, why wouldn't you? What's the big deal for Pabbly in increasing it?

We have written to the OpenAI team today. We will update the thread as soon as we hear back about ways to optimize the use of their API.

Hello Team,

I hope this email finds you well.

I am writing to inquire about the possibility of optimizing the response time of your API. We have noticed that the API can take more than 25 seconds to respond in some cases, which makes it difficult for users to run it in automation.

Is it possible to send the request ID immediately in the response when the API is fired so that we can use that request ID to fetch the final response with some delay?

This would allow us to better manage the wait time for our users and improve their experience with our API. Thank you for your time and I look forward to hearing back from you.

Best regards,
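The submit-then-poll pattern proposed in the email above can be illustrated with a small, self-contained sketch. This is not a real OpenAI endpoint; the `submit`/`poll` functions and the in-memory job store are hypothetical stand-ins that simulate how returning a request ID immediately would let a platform check back later instead of holding one long connection open:

```python
import threading
import time

# In-memory stand-in for the proposed async API: submit() returns a
# request ID right away; the "generation" finishes in the background.
_results = {}

def submit(prompt):
    request_id = f"req-{len(_results)}"
    _results[request_id] = None  # mark as pending before the worker starts

    def worker():
        time.sleep(0.2)  # stands in for a slow (>25 s) generation
        _results[request_id] = f"generated text for: {prompt}"

    threading.Thread(target=worker, daemon=True).start()
    return request_id  # returned at once, no long synchronous wait

def poll(request_id, interval=0.1, attempts=10):
    """Fetch the final response in short, cheap checks instead of one long wait."""
    for _ in range(attempts):
        result = _results[request_id]
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError(f"{request_id} still pending")

rid = submit("write an article about <keyword>")
print(poll(rid))
```

The point of the pattern is that the 25-second limit only ever applies to each short `poll` call, never to the full generation time.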
 

smugo

Member
Hey @smugo

We have checked the issue with the team and got a response that they will reconsider the 25-second window for fetching the response from OpenAI and will try to implement a larger time frame. As of now, there is no ETA for this. We will keep you updated on the status of this conversation.

Hello,
Any progress? I simplified the prompt down to a minimum of "write an article about <keyword>" with 1000 tokens, and it still returns a 35-second timeout. Very frustrating...
 

omp

Member
We have written to the OpenAI team today. We will update the thread as soon as we hear back about ways to optimize the use of their API.
I have sent this to OpenAI. Hope the issue will be fixed, thanks much
 

Neeraj

Administrator
Staff member
For a limited time, we have increased the timeout window for OpenAI Generate Content to 40 seconds, although this is not an ideal thing to do.

Please note that Zapier has a timeout limit of 30 seconds for OpenAI, and similar issues are being faced by users across different platforms.

The ideal way to handle this use case is something that we have communicated to the OpenAI team. We have sent the email but haven't received any reply back from them yet.

The message sent to the OpenAI team is below. I would recommend that users also send this message from their end.

Hello Team,

I hope this email finds you well.

I am writing to inquire about the possibility of optimizing the response time of your API. We have noticed that the API can take more than 25 seconds to respond in some cases, which makes it difficult for users to run it in automation.

Is it possible to send the request ID immediately in the response when the API is fired so that we can use that request ID to fetch the final response with some delay?

This would allow us to better manage the wait time for our users and improve their experience with our API. Thank you for your time and I look forward to hearing back from you.

Best regards,
 

rickbaboo

Member
One of the main reasons we use Zapier is the connection with OpenAI and the fact that it retries automatically after a failed API call. Right now the OpenAI integration in Pabbly is unusable, as I can't get a response. We need a workaround, or bye-bye Pabbly.
 

Supreme

Well-known member
Staff member
Temporarily, the timeout window for OpenAI Generate Content has been increased to 40 seconds, although this is not the preferred solution. It is important to note that Zapier has a timeout limit of 30 seconds for OpenAI, and other platforms are encountering similar issues.

We have informed the OpenAI team about the ideal solution for this, but have not received a response.
 

MKG

Member
Temporarily, the timeout window for OpenAI Generate Content has been increased to 40 seconds, although this is not the preferred solution. It is important to note that Zapier has a timeout limit of 30 seconds for OpenAI, and other platforms are encountering similar issues.

We have informed the OpenAI team about the ideal solution for this, but have not received a response.
Why not try to implement the auto-retry option mentioned by users earlier? That would be a far easier solution to the problem than waiting for the OpenAI team to answer...
 

Supreme

Well-known member
Staff member
We have taken note of your concern and will attempt to address it on the platform soon.
 

Supreme

Well-known member
Staff member
Unfortunately, we are unable to provide you with an ETA, because several checks need to be completed before any modifications are made. Once you submit a specific integration request, you will receive an email notification.
 

Jimbo

Member
Can you use the DELAY module? But I guess if there is no response from OpenAI/ChatGPT, then there is no ID to link to after waiting.
Does anyone have a workaround for OpenAI timing out, or are we stuck until Pabbly comes up with a solution?
 

Pabblynaut

Member
Can you use the DELAY module? But I guess if there is no response from OpenAI/ChatGPT, then there is no ID to link to after waiting.
Does anyone have a workaround for OpenAI timing out, or are we stuck until Pabbly comes up with a solution?
The delay module does not solve the problem, because the error is that things time out on the OpenAI side. So we need an auto-retry-as-part-of-the-flow option.
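The auto-retry option users keep asking for is a standard pattern. Here is a minimal sketch of retrying a timed-out call with exponential backoff; `flaky_request` is a hypothetical stand-in for the OpenAI call, not Pabbly's or OpenAI's actual code:

```python
import time

def call_with_retries(send_request, max_attempts=3, base_delay=1.0):
    """Retry a call that raised TimeoutError, backing off 1 s, 2 s, 4 s, ..."""
    for attempt in range(max_attempts):
        try:
            return send_request()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error to the workflow
            time.sleep(base_delay * 2 ** attempt)

# Demo: a stand-in request that times out twice, then succeeds.
calls = {"n": 0}

def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("no response in 25 seconds")
    return "5 social media posts ..."

print(call_with_retries(flaky_request, base_delay=0.01))
```

Retrying only makes sense if the upstream call is safe to repeat; for content generation, a duplicate request simply regenerates text, so the pattern fits this use case.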
 

Oleksandr

Member
Hey everyone, I started having the same issue with a very simple query ("extract the email"), so I am not sure if this is an issue on OpenAI's side or Pabbly's.
 
Maybe another option: since we know it takes time, after sending the request to OpenAI you could add an option to insert a delay. That way, if the token count is high, we could add a 10-15 second delay before waiting for the response. Hope this is clear. With a long generation, we could even wait 30 seconds before starting to wait for the response.
How do we do this?
 