OpenAI - Response not received in 25 seconds.

smugo

Member
Hey @smugo

We have checked the issue with the team and got a response that they will reconsider the decision to fetch the response from OpenAI within 25 seconds and will try to implement a bigger time frame. As of now, there is no ETA for this. We will keep you updated on the status of this conversation.
Thanks for your reply.
An ETA would be really great. And I'm talking about hours or 1-2 days here, since the whole process is really useless otherwise. You have many tutorials online about auto-generating content and publishing it with OpenAI. How did you guys do it then?
 

Neeraj

Administrator
Staff member
Thanks for your reply.
An ETA would be really great. And I'm talking about hours or 1-2 days here, since the whole process is really useless otherwise. You have many tutorials online about auto-generating content and publishing it with OpenAI. How did you guys do it then?

What is the prompt that you are using to generate the output?

In most cases, the 25-second limit works perfectly fine unless you are generating a very complex response.

I would have to say that currently we won't be able to wait more than 25 seconds for the response to arrive from an external application.

We have also written a guide on how to better handle the wait time for external applications, which you can recommend to the OpenAI team.



We talked internally with the tech team today, and the 25-second limit is not planned to be increased for now.
 

smugo

Member
What is the prompt that you are using to generate the output?

In most cases, the 25-second limit works perfectly fine unless you are generating a very complex response.

I would have to say that currently we won't be able to wait more than 25 seconds for the response to arrive from an external application.

We have also written a guide on how to better handle the wait time for external applications, which you can recommend to the OpenAI team.


We talked internally with the tech team today, and the 25-second limit is not planned to be increased for now.

I don't think the prompt is too complex. But it's certainly more than just "keyword", because that returns pretty useless random text.

It's not true that it works just fine in most cases, as you can tell from the various reports here in the forum. Also, just imagine, for example, a prompt like this:

"Create 5 social media posts about <topic>. Use emotional adjectives, don't make any health promises. Add 3 trending hashtags at the end."
It will end up in 25-second timeouts. It's really not much text to generate... maybe like 500 tokens. And this would be one of the very easy prompts...

You should really consider increasing the timeout to align with Zapier and other competitors. The whole OpenAI topic is otherwise not useful in Pabbly. Also, why wouldn't you? What's the big deal for Pabbly in increasing it?
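To put some back-of-the-envelope numbers behind the 500-token example above: the generation speed used here is an illustrative assumption, not a measured value, but it shows how a modest completion can overrun a 25-second window.

```python
# Rough estimate of completion latency. The tokens-per-second figure
# below is an illustrative assumption; actual throughput varies by
# model and server load.

def estimated_latency_seconds(completion_tokens: int,
                              tokens_per_second: float) -> float:
    """Approximate generation time, ignoring network and queue overhead."""
    return completion_tokens / tokens_per_second

# ~500 tokens at an assumed 15 tokens/second:
latency = estimated_latency_seconds(500, 15.0)
print(f"{latency:.1f} s")  # -> 33.3 s, well above a 25-second timeout
```

Even under these optimistic assumptions the request finishes after the 25-second cutoff, which matches the timeouts reported in this thread.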
 

Neeraj

Administrator
Staff member
I don't think the prompt is too complex. But it's certainly more than just "keyword", because that returns pretty useless random text.

It's not true that it works just fine in most cases, as you can tell from the various reports here in the forum. Also, just imagine, for example, a prompt like this:

"Create 5 social media posts about <topic>. Use emotional adjectives, don't make any health promises. Add 3 trending hashtags at the end."
It will end up in 25-second timeouts. It's really not much text to generate... maybe like 500 tokens. And this would be one of the very easy prompts...

You should really consider increasing the timeout to align with Zapier and other competitors. The whole OpenAI topic is otherwise not useful in Pabbly. Also, why wouldn't you? What's the big deal for Pabbly in increasing it?

We have written to the OpenAI team today. We will update the thread as soon as we hear back on how to optimize the use of their API.

Hello Team,

I hope this email finds you well.

I am writing to inquire about the possibility of optimizing the response time of your API. We have noticed that the API can take more than 25 seconds to respond in some cases, which makes it difficult for users to run it in automation.

Is it possible to send the request ID immediately in the response when the API is fired so that we can use that request ID to fetch the final response with some delay?

This would allow us to better manage the wait time for our users and improve their experience with our API. Thank you for your time and I look forward to hearing back from you.

Best regards,
 

smugo

Member
Hey @smugo

We have checked the issue with the team and got a response that they will reconsider the decision to fetch the response from OpenAI within 25 seconds and will try to implement a bigger time frame. As of now, there is no ETA for this. We will keep you updated on the status of this conversation.

Hello,
any progress? I simplified the prompt down to a minimum of "write an article about <keyword>" and 1000 tokens, and it still hits a timeout after 35 seconds. Very frustrating...
 

omp

Member
We have written to the OpenAI team today. We will update the thread as soon as we hear back on how to optimize the use of their API.
I have sent this to OpenAI as well. Hope the issue will be fixed, thanks much.
 

Neeraj

Administrator
Staff member
For a limited time, we have increased the timeout window for OpenAI Generate Content to 40 seconds, although this is not an ideal thing to do.

Please note that Zapier has a timeout limit of 30 seconds for OpenAI, and similar issues are being faced by users across different platforms.

The ideal way to handle this use case is something that we have shared with the OpenAI team. We have sent the email but haven't received any reply back from them.

The message sent to the OpenAI team is below. I would recommend that users also send the message below from their end.

Hello Team,

I hope this email finds you well.

I am writing to inquire about the possibility of optimizing the response time of your API. We have noticed that the API can take more than 25 seconds to respond in some cases, which makes it difficult for users to run it in automation.

Is it possible to send the request ID immediately in the response when the API is fired so that we can use that request ID to fetch the final response with some delay?

This would allow us to better manage the wait time for our users and improve their experience with our API. Thank you for your time and I look forward to hearing back from you.

Best regards,
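The request-ID flow proposed in the email above can be sketched as follows. Note this is only an illustration of the pattern: the function names (submit_request, fetch_response) are hypothetical, no such endpoint existed in the OpenAI API at the time, and the slow backend is simulated here with a thread.

```python
# Sketch of the submit-then-poll pattern described in the email.
# submit_request() returns a request ID immediately; the caller can
# fetch the finished result later, so no single HTTP call has to
# stay open past a platform's 25-second timeout.

import threading
import time
import uuid

_results = {}  # request_id -> generated text, or None while pending

def submit_request(prompt):
    """Start generation in the background and return a request ID at once."""
    request_id = str(uuid.uuid4())
    _results[request_id] = None  # mark as pending

    def _work():
        time.sleep(0.1)  # stand-in for a generation that may take >25 s
        _results[request_id] = f"generated text for: {prompt}"

    threading.Thread(target=_work, daemon=True).start()
    return request_id

def fetch_response(request_id):
    """Return the finished result, or None if the job is still running."""
    return _results.get(request_id)

# Usage: fire the request, then poll with a delay instead of blocking.
rid = submit_request("write an article about <keyword>")
while fetch_response(rid) is None:
    time.sleep(0.05)
print(fetch_response(rid))
```

This is the same shape of workaround the guide mentioned earlier in the thread aims at: the automation step that fires the request finishes instantly, and a later step (after a delay) retrieves the result by ID.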
 