
Quick start

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. When integrated with LLMs, it is capable of providing truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.

This quick start guide describes a general process from:

  • Starting up a local RAGFlow server,
  • Creating a knowledge base,
  • Intervening with file parsing, to
  • Establishing an AI chat based on your datasets.

Prerequisites

  • CPU ≥ 4 cores;

  • RAM ≥ 16 GB;

  • Disk ≥ 50 GB;

  • Docker ≥ 24.0.0 & Docker Compose ≥ v2.26.1.

    If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.
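If you want to verify these requirements before installing, the checks can be scripted. The following is a rough sketch for Linux only; the thresholds mirror the list above, and the Docker lines simply report whether the binaries are present:

```shell
# Rough prerequisite check (a sketch for Linux; thresholds mirror the list above).
echo "CPU cores : $(nproc)  (need >= 4)"
echo "RAM (GB)  : $(awk '/MemTotal/ {printf "%d", $2/1048576}' /proc/meminfo)  (need >= 16)"
echo "Free disk : $(df -k / | awk 'NR==2 {printf "%d", $4/1048576}') GB  (need >= 50)"
docker --version 2>/dev/null || echo "Docker not found -- see Install Docker Engine"
docker compose version 2>/dev/null || echo "Docker Compose v2 not found"
```

If any check falls short, fix it before continuing; the Elasticsearch and MinIO containers in particular are memory- and disk-hungry.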

Start up the server

This section provides instructions on setting up the RAGFlow server on Linux. If you are on a different operating system, don't worry: most of the steps are similar.

1. Ensure vm.max_map_count ≥ 262144:

vm.max_map_count sets the maximum number of memory map areas a process may have. Its default value is 65530. While most applications require fewer than a thousand maps, a value this low can result in abnormal behavior, and the system will throw out-of-memory errors when a process reaches the limit.

RAGFlow v0.8.0 uses Elasticsearch for multiple recall. Setting the value of vm.max_map_count correctly is crucial to the proper functioning of the Elasticsearch component.

1.1. Check the value of vm.max_map_count:

$ sysctl vm.max_map_count

1.2. If the value is less than 262144, reset vm.max_map_count to at least 262144:

$ sudo sysctl -w vm.max_map_count=262144
WARNING

This change will be reset after a system reboot. If you forget to update the value the next time you start up the server, you may get a Can't connect to ES cluster exception.

1.3. To ensure your change remains permanent, add or update the vm.max_map_count value in /etc/sysctl.conf accordingly:

vm.max_map_count=262144
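
Steps 1.1 through 1.3 can be rolled into one quick check. This sketch only reads the current value and prints the commands to run; it changes nothing by itself:

```shell
# Read the current limit and compare it against what Elasticsearch needs.
required=262144
current=$(cat /proc/sys/vm/max_map_count)
if [ "$current" -lt "$required" ]; then
    echo "vm.max_map_count=$current is too low."
    echo "Run: sudo sysctl -w vm.max_map_count=$required"
    echo "Then persist it by adding 'vm.max_map_count=$required' to /etc/sysctl.conf"
else
    echo "vm.max_map_count=$current is sufficient."
fi
```
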
  2. Clone the repo:

    $ git clone https://github.com/infiniflow/ragflow.git
  3. Pull the pre-built Docker images and start up the server:

    Running the following commands automatically downloads the dev version of the RAGFlow Docker image. To download and run a specific release, update RAGFLOW_VERSION in docker/.env to the intended version, for example RAGFLOW_VERSION=v0.8.0, before running the following commands.

    $ cd ragflow/docker
    $ chmod +x ./entrypoint.sh
    $ docker compose up -d

The core image is about 9 GB in size and may take a while to download.
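
If you prefer to pin a release instead of running the dev image, the RAGFLOW_VERSION edit described above is easy to script. The snippet below works on a scratch copy purely for illustration; in practice, point the sed command at ragflow/docker/.env:

```shell
# Illustration only: pin a release by rewriting RAGFLOW_VERSION.
# In practice, run the sed line against ragflow/docker/.env instead
# of this scratch file.
printf 'RAGFLOW_VERSION=dev\n' > env.example
sed -i 's/^RAGFLOW_VERSION=.*/RAGFLOW_VERSION=v0.8.0/' env.example
grep RAGFLOW_VERSION env.example    # prints: RAGFLOW_VERSION=v0.8.0
```

Remember to make the edit before running `docker compose up -d`, since the version is read when the containers are created.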

  4. Check the server status after it is up and running:

    $ docker logs -f ragflow-server

    The following output confirms a successful launch of the system:

        ____                 ______ __
       / __ \ ____ _ ____ _ / ____// /____ _ __
      / /_/ // __ `// __ `// /_ / // __ \| | /| / /
     / _, _// /_/ // /_/ // __/ / // /_/ /| |/ |/ /
    /_/ |_| \__,_/ \__, //_/ /_/ \____/ |__/|__/
                  /____/

    * Running on all addresses (0.0.0.0)
    * Running on http://127.0.0.1:9380
    * Running on http://x.x.x.x:9380
    INFO:werkzeug:Press CTRL+C to quit

    If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a network anomaly error because, at that moment, your RAGFlow may not be fully initialized.

  5. In your web browser, enter the IP address of your server and log in to RAGFlow.

WARNING

With the default settings, you only need to enter http://IP_OF_YOUR_MACHINE (sans port number): the default HTTP serving port 80 can be omitted.

Configure LLMs

RAGFlow is a RAG engine, and it needs to work with an LLM to offer grounded, hallucination-free question-answering capabilities. For now, RAGFlow supports the following LLMs, and the list is expanding:

  • OpenAI
  • Azure-OpenAI
  • Gemini
  • Groq
  • Mistral
  • Bedrock
  • Tongyi-Qianwen
  • ZHIPU-AI
  • MiniMax
  • Moonshot
  • DeepSeek-V2
  • Baichuan
  • VolcEngine
NOTE

RAGFlow also supports deploying LLMs locally using Ollama or Xinference, but this part is not covered in this quick start guide.

To add and configure an LLM:

  1. Click your logo in the top-right corner of the page > Model Providers:

    [Screenshot: add llm]

    Each RAGFlow account can use text-embedding-v2, an embedding model of Tongyi-Qianwen, for free. This is why Tongyi-Qianwen appears in the Added models list. You may need to update your Tongyi-Qianwen API key at a later point.

  2. Click on the desired LLM and update the API key accordingly (DeepSeek-V2 in this case):

    [Screenshot: update api key]

    Your added models appear as follows:

    [Screenshot: added available models]

  3. Click System Model Settings to select the default models:

    • Chat model,
    • Embedding model,
    • Image-to-text model.

    [Screenshot: system model settings]

Some models, such as the image-to-text model qwen-vl-max, are subsidiary to a specific LLM, and you may need to update your API key to access them.

Create your first knowledge base

You can upload files to a knowledge base in RAGFlow and parse them into datasets. A knowledge base is essentially a collection of datasets. Question answering in RAGFlow can be based on one knowledge base or multiple knowledge bases. File formats that RAGFlow supports include documents (PDF, DOC, DOCX, TXT, MD), tables (CSV, XLSX, XLS), pictures (JPEG, JPG, PNG, TIF, GIF), and slides (PPT, PPTX).

To create your first knowledge base:

  1. Click the Knowledge Base tab in the top middle of the page > Create knowledge base.

  2. Input the name of your knowledge base and click OK to confirm your changes.

    You are taken to the Configuration page of your knowledge base.

    [Screenshot: knowledge base configuration]

  3. RAGFlow offers multiple chunk templates that cater to different document layouts and file formats. Select the embedding model and chunk method (template) for your knowledge base.

IMPORTANT

Once you have selected an embedding model and used it to parse a file, you are no longer allowed to change it. The reason is that all files in a given knowledge base must be parsed with the same embedding model, so that they are compared in the same embedding space.

You are taken to the Dataset page of your knowledge base.

  4. Click + Add file > Local files to start uploading a particular file to the knowledge base.

  5. In the uploaded file entry, click the play button to start file parsing:

    [Screenshot: file parsing]

    When the file parsing completes, its parsing status changes to SUCCESS.

NOTE

  • If your file parsing gets stuck at below 1%, see FAQ 4.3.
  • If your file parsing gets stuck at near completion, see FAQ 4.4.

Intervene with file parsing

RAGFlow features visibility and explainability, allowing you to view the chunking results and intervene where necessary. To do so:

  1. Click the file once it finishes parsing to view the chunking results:

    You are taken to the Chunk page:

    [Screenshot: chunks]

  2. Hover over each snapshot for a quick view of each chunk.

  3. Double click the chunked texts to add keywords or make manual changes where necessary:

    [Screenshot: update chunk]

NOTE

You can add keywords to a file chunk to increase its relevance. This action increases its keyword weight and can improve its position in the search list.

  4. In Retrieval testing, ask a quick question in Test text to double-check whether your configurations work:

    As you can tell from the following, RAGFlow responds with truthful citations.

    [Screenshot: retrieval test]

Set up an AI chat

Conversations in RAGFlow are based on a particular knowledge base or multiple knowledge bases. Once you have created your knowledge base and finished file parsing, you can go ahead and start an AI conversation.

  1. Click the Chat tab in the middle top of the page > Create an assistant to open the Chat Configuration dialogue for your next dialogue.

    RAGFlow offers the flexibility of choosing a different chat model for each dialogue, while allowing you to set the default models in System Model Settings.

  2. Update Assistant Setting:

    • Name your assistant and specify your knowledge bases.
    • Empty response:
      • If you wish to confine RAGFlow's answers to your knowledge bases, leave a response here. Then, whenever it fails to retrieve an answer, it uniformly responds with what you set here.
      • If you wish RAGFlow to improvise when it fails to retrieve an answer from your knowledge bases, leave it blank, though this may give rise to hallucinations.
  3. Update Prompt Engine, or leave it as is for now.

  4. Update Model Setting.

  5. RAGFlow also offers conversation APIs. Hover over your dialogue > Chat Bot API to integrate RAGFlow's chat capabilities into your applications:

    [Screenshot: chatbot api]

  6. Now, let's start the show:

    [Screenshot: question 1]

    [Screenshot: question 2]