Error while fetching the file: 404 #2231

Open
prasantpoudel opened this issue Sep 19, 2024 · 1 comment
Comments

@prasantpoudel

When I run the model with a large prompt and content, I get the response Error while fetching the file: 404 instead of the result. Here is my code:

from g4f.client import Client

client = Client()

# website_content and combine_prompt hold the (large) page text and the
# instruction; placeholders shown here for completeness.
website_content = "..."  # large scraped page text
combine_prompt = "..."   # instruction for the model

content = []
content.append({
    "type": "text",
    "text": f" text: <website>{website_content}</website> <instruction>{combine_prompt}</instruction>",
})
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": content}],
    max_tokens=2048,
)
print(response.choices[0].message.content)
@prasantpoudel added the bug (Something isn't working) label Sep 19, 2024
@kqlio67 (Contributor) commented Sep 19, 2024

Hey there! I see you're having trouble processing large amounts of text. Here are some suggestions that might help:

  1. Try reducing the content size. Some APIs have limits on the amount of text they can process in a single request.
  2. If reducing the size isn't an option, you can split the text into smaller parts and process them separately. Here's a modified version of your code that does this:
from g4f.client import Client
import textwrap

def split_content(text, max_length=3000):
    # Wrap the text into chunks of at most max_length characters,
    # keeping whole words and existing whitespace intact.
    return textwrap.wrap(text, max_length, break_long_words=False, replace_whitespace=False)

def process_content(client, model, website_content, combine_prompt, max_tokens=2048):
    # Build the full prompt, split it into chunks, and query the model
    # once per chunk, collecting the responses.
    full_content = f"<website>{website_content}</website> <instruction>{combine_prompt}</instruction>"
    chunks = split_content(full_content)
    
    all_responses = []
    
    for i, chunk in enumerate(chunks):
        content = [{"type": "text", "text": chunk}]
        
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": content}],
                max_tokens=max_tokens,
            )
            all_responses.append(response.choices[0].message.content)
            print(f"Processed chunk {i+1}/{len(chunks)}")
        except Exception as e:
            print(f"Error processing chunk {i+1}: {e}")
    
    return "\n".join(all_responses)

# Usage
client = Client()
model = "gpt-4o-mini"

website_content = "Your long website content goes here..."
combine_prompt = "Your instruction or prompt goes here..."

result = process_content(client, model, website_content, combine_prompt)
print("Final result:")
print(result)

This script breaks your content into smaller chunks and processes them separately. It should help avoid errors related to content size.

Currently, few providers in the gpt4free project support a large context; most limit how much text they can process in a single request. This may change as more providers are added or existing ones are updated to handle larger inputs, but in the meantime the code example above offers a workaround for processing large amounts of text within the current limits.
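If you do find a provider that handles longer inputs, you can pin it explicitly instead of letting g4f pick one. A minimal sketch, assuming the Client constructor accepts a provider argument; Blackbox is only an illustration, and provider availability changes frequently, so check g4f.Provider for the current list:

from g4f.client import Client
from g4f.Provider import Blackbox  # illustrative choice, not a recommendation

# Assumption: passing provider= pins all requests from this client to that
# provider instead of letting g4f rotate through the available ones.
client = Client(provider=Blackbox)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)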

Remember to adjust the max_length in split_content and max_tokens in process_content based on the specific limitations of the model and provider you're using.
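For example (illustrative values only; tune them to your provider's real limits):

# Smaller chunks and shorter completions for a stricter provider.
result = process_content(
    client, model, website_content, combine_prompt,
    max_tokens=1024,
)
# and inside process_content, pass a tighter chunk size:
# chunks = split_content(full_content, max_length=2000)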
