Hi,
What is the right procedure/workflow for a continuous/follow-up task flow with OpenAI using Pabbly Connect?
I have a step with App: OpenAI > Action event: Generate Content. In that step I ask OpenAI to write an article ("You are a consultant who helps business owners with XXX. Write a long, detailed, professional article for a magazine based on the title you will find at the end of these instructions...", etc.).
When I limit the response to 256 tokens, I get an immediate response. But when I increase the limit to the maximum of 2048 tokens, there is a time-out 95% of the time: OpenAI does not respond within the 40 seconds that Pabbly Connect allows. (Even though I have a paid OpenAI account.)
The only way I can think of to circumvent this problem is to get OpenAI to produce the article in multiple parts (prompt: "Break the article up into 5 sections of a maximum of 256 tokens each").
OpenAI's first response would give me the first section of the article... but how would I go about asking for sections 2, 3, 4 and 5 in Pabbly Connect? That is, how do I follow up on the previous request/response?
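(For reference, here is roughly what I imagine each chained request would have to send if it were calling the OpenAI API directly: the original brief plus the sections generated so far, asking for the next section only. The model name, token limits and prompts below are just my placeholders, not anything Pabbly-specific.)

```python
# Sketch only: generate an article in 5 chained requests, each one carrying
# the brief plus everything written so far. Assumes the OpenAI Python SDK (v1+)
# and OPENAI_API_KEY set in the environment; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRIEF = (
    "You are a consultant who helps business owners with XXX. "
    "Write a long, detailed, professional article for a magazine based on this title: ..."
)

sections = []
for i in range(1, 6):
    context = "\n\n".join(sections)  # everything written so far
    prompt = (
        f"{BRIEF}\n\n"
        f"The article has 5 sections. Here is what has been written so far:\n{context}\n\n"
        f"Now write section {i} only, in at most 256 tokens."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        max_tokens=300,          # keep each call small so it returns quickly
        messages=[{"role": "user", "content": prompt}],
    )
    sections.append(response.choices[0].message.content)

article = "\n\n".join(sections)  # final step: join the parts into one article
print(article)
```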
I could do the same using ChatGPT, but then again, how would I, after receiving the first section, "ask" ChatGPT to continue and provide the remaining sections?
(When this happens in the ChatGPT interface itself, i.e. when it stops writing before the article is finished, I give it the prompt "continue" and it finishes it off. But how would I do the same in Pabbly Connect, so that I get all of the 5 sections and can then connect them together into one long article...?)
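(Again, just to illustrate what I mean: if this were done directly against the OpenAI API, I imagine the "continue" trick would look something like the sketch below, where the partial answer is sent back as an assistant message before asking it to continue. The model name and the 5-round cap are placeholders of mine.)

```python
# Sketch only: the ChatGPT-style "continue" pattern. The model's partial output
# is appended as an assistant message, followed by a "continue" user message,
# until it stops for a reason other than the token limit. Assumes the OpenAI
# Python SDK (v1+); model name and round cap are illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "user", "content": "Write a long, detailed magazine article about XXX."}
]
parts = []

for _ in range(5):  # up to 5 rounds, mirroring the 5-section idea
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        max_tokens=300,
        messages=messages,
    )
    choice = response.choices[0]
    parts.append(choice.message.content)
    if choice.finish_reason != "length":  # model finished on its own
        break
    # Keep the conversation going, exactly like typing "continue" in ChatGPT.
    messages.append({"role": "assistant", "content": choice.message.content})
    messages.append({"role": "user", "content": "continue"})

article = "".join(parts)
print(article)
```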
Thanks.
BTW Pabbly is great, saves a lot of time!