
Windows installation (watch my YouTube video here for more info: )

Install Git LFS and clone the repository: `git lfs install`, then `git clone https://huggingface.co/Aitrepreneur/FLUX-Prompt-Generator`

Inside the FLUX-Prompt-Generator folder, create and activate a new virtual environment: `python -m venv env`, then `env\Scripts\activate`

Install the requirements: `pip install -r requirements.txt`

Then run the application: `python app.py`

Inside the app.py file, on line 337: `self.groq_client = Groq(api_key="YOUR-GROQ-API-KEY")`, replace YOUR-GROQ-API-KEY with your Groq API key if you want to use Groq for the text generation.
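Hardcoding the key in app.py works, but a safer pattern is to read it from an environment variable so the key never ends up in the file. A minimal sketch (the helper name `get_groq_api_key` and the `GROQ_API_KEY` variable name are my own choices for illustration, not part of the app):

```python
import os

def get_groq_api_key(default: str = "YOUR-GROQ-API-KEY") -> str:
    # Prefer an environment variable over a key hardcoded in app.py,
    # falling back to the placeholder if it is not set.
    return os.environ.get("GROQ_API_KEY", default)

# Line 337 of app.py could then become:
#   self.groq_client = Groq(api_key=get_groq_api_key())
```

Set the variable once with `set GROQ_API_KEY=...` in the same cmd window before running `python app.py`.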

IF YOU WANT TO USE EVERYTHING LOCALLY:

  1. Download and install ollama: https://ollama.com/
  2. Once Ollama is running in the background, download the Llama 3 8B model by running this command in a new cmd window: `ollama run llama3`
  3. Once the Llama 3 8B model is downloaded, go to the FLUX Prompt Generator web UI, check the "Use Ollama (local)" checkbox, and you are good to go
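With the checkbox enabled, the app talks to Ollama's local HTTP API instead of Groq. A minimal sketch of what such a call looks like, assuming Ollama's default port 11434 and the `llama3` model pulled above (the function names here are illustrative, not the app's actual code):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_ollama_request(prompt: str, model: str = "llama3") -> dict:
    # Payload for Ollama's /api/generate endpoint; stream=False asks for
    # the full completion in a single JSON response.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    # Requires Ollama running in the background (it serves on port 11434).
    data = json.dumps(build_ollama_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Everything stays on your machine; no API key is needed for the local path.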