This retriever microservice is a highly efficient search service designed for handling and retrieving embedding vectors. It operates by receiving an embedding vector as input and conducting a similarity search against the vectors stored in a vector database (VectorDB). Users must specify the VectorDB's URL and the index name, and the service searches within that index to find the documents most similar to the input vector.
The service uses similarity measures in vector space to rapidly retrieve semantically similar documents. This vector-based retrieval approach is particularly well suited to large datasets, offering fast and accurate search results that significantly enhance the efficiency and quality of information retrieval.
Overall, this microservice provides robust backend support for applications requiring efficient similarity searches, playing a vital role in scenarios such as recommendation systems, information retrieval, or any other context where precise measurement of document similarity is crucial.
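To make the retrieval concept concrete, the sketch below ranks an in-memory set of vectors by cosine similarity to a query embedding. It is a minimal illustration only: the function names and the in-memory store are hypothetical, and the actual service delegates this search to the configured VectorDB index.

```python
# Conceptual sketch of vector similarity retrieval (hypothetical, in-memory).
# The real microservice delegates this search to the configured VectorDB index.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_top_k(query: list[float], store: dict[str, list[float]], k: int = 3):
    """Rank stored documents by similarity to the query embedding."""
    scored = [(doc_id, cosine_similarity(query, vec)) for doc_id, vec in store.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

# Toy usage: two 2-dimensional "documents" and a query vector.
store = {"doc1": [0.1, 0.9], "doc2": [0.8, 0.2]}
print(retrieve_top_k([0.7, 0.3], store, k=1))  # doc2 scores highest
```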
To start the retriever microservice, first install the required Python packages:
```bash
pip install -r requirements.txt
```
Then start a TEI (Text Embeddings Inference) service to serve the embedding model:

```bash
model=BAAI/bge-base-en-v1.5
volume=$PWD/data
docker run -d -p 6060:80 -v $volume:/data -e http_proxy=$http_proxy -e https_proxy=$https_proxy --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.5 --model-id $model
```
Health check the embedding service with:
```bash
curl 127.0.0.1:6060/embed \
  -X POST \
  -d '{"inputs":"What is Deep Learning?"}' \
  -H 'Content-Type: application/json'
```
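If you prefer to verify the embedding service from Python, the snippet below performs the same request. It assumes TEI's default JSON response, a list of embedding vectors, and requires the `requests` package.

```python
# Equivalent health check from Python (requires the `requests` package).
import requests

resp = requests.post(
    "http://127.0.0.1:6060/embed",
    json={"inputs": "What is Deep Learning?"},
    timeout=30,
)
resp.raise_for_status()
embeddings = resp.json()  # expected: a list of embedding vectors
print(f"embedding dimension: {len(embeddings[0])}")  # 768 for bge-base-en-v1.5
```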
Please refer to this readme.
To run the retriever microservice directly with Python, set the required environment variables and start the service:

```bash
export TEI_EMBEDDING_ENDPOINT="http://${your_ip}:6060"
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export RETRIEVER_COMPONENT_NAME="OPEA_RETRIEVER_ELASTICSEARCH"
python opea_retrievers_microservice.py
```
To run the microservice with Docker, first set the environment variables:

```bash
export EMBED_MODEL="BAAI/bge-base-en-v1.5"
export ES_CONNECTION_STRING="http://localhost:9200"
export INDEX_NAME=${your_index_name}
export TEI_EMBEDDING_ENDPOINT="http://${your_ip}:6060"
export RETRIEVER_COMPONENT_NAME="OPEA_RETRIEVER_ELASTICSEARCH"
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
```
Then build the Docker image:

```bash
cd ../../../
docker build -t opea/retriever:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/retrievers/src/Dockerfile .
```
To start a Docker container, you have two options:
- A. Run Docker with CLI
- B. Run Docker with Docker Compose
You can choose one as needed.
Option A (Docker CLI):

```bash
docker run -d --name="retriever-elasticsearch" -p 7000:7000 --ipc=host \
  -e http_proxy=$http_proxy -e https_proxy=$https_proxy \
  -e ES_CONNECTION_STRING=$ES_CONNECTION_STRING -e INDEX_NAME=$INDEX_NAME \
  -e TEI_EMBEDDING_ENDPOINT=${TEI_EMBEDDING_ENDPOINT} \
  -e HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN} opea/retriever:latest
```
Option B (Docker Compose):

```bash
cd ../deployment/docker_compose
export service_name="retriever-elasticsearch"
docker compose -f compose.yaml up ${service_name} -d
```
Once the container is running, check that the retriever service is healthy:

```bash
curl http://localhost:7000/v1/health_check \
  -X GET \
  -H 'Content-Type: application/json'
```
To consume the retriever microservice, generate a mock embedding vector of length 768 with Python and send it in a retrieval request:
```bash
export your_embedding=$(python -c "import random; embedding = [random.uniform(-1, 1) for _ in range(768)]; print(embedding)")
curl http://${your_ip}:7000/v1/retrieval \
  -X POST \
  -d "{\"text\":\"What is the revenue of Nike in 2023?\",\"embedding\":${your_embedding}}" \
  -H 'Content-Type: application/json'
```
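The same request can be issued from Python. The payload fields below mirror the curl example above; the response handling is a sketch, since the exact response schema depends on the retriever version, and `requests` is assumed to be installed.

```python
# Query the retriever from Python (requires the `requests` package).
import random
import requests

your_ip = "localhost"  # replace with the host running the retriever
embedding = [random.uniform(-1, 1) for _ in range(768)]  # mock 768-dim vector

resp = requests.post(
    f"http://{your_ip}:7000/v1/retrieval",
    json={"text": "What is the revenue of Nike in 2023?", "embedding": embedding},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # retrieved documents ranked by similarity
```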