Ability of chatbots to provide deduced and summarized answers (Q&A)
Context: The PoC evaluates the use of LLM and AI techniques to enhance chatbots and search portals, enabling them to deliver precise and concise answers rather than simply presenting a list of search results. In this case, the question-answering capabilities were demonstrated for the Publications Office portal and for Publio, the intelligent search assistant of the Publications Office of the European Union.
Benefits of deduced and summarized answers (Q&A)
Architecture
Technical setup
- The Knowledge Base is used to build a model by processing the available documents
- Vectorization is applied to each document in the model and the results are stored in the vector DB
- RAG uses the vectorized information in the vector DB to ground the LLM's answers in the source documents
User query
- The user submits a query
- The LLM uses RAG to find the most suitable answers (including sources) in the vector DB
- An answer is generated and returned to the user, based on the vector match between the question and candidate answers. Multiple answer styles can potentially be provided, e.g. simple, advanced, or expert answers
Technical setup
Documents from the selected sources (EU Whoiswho, EU Publications, EU Law in Force) are ingested into the data processing unit, which vectorizes them by generating embeddings that are stored in the vector DB (for example Elasticsearch or Qdrant).
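The ingestion step above can be sketched as follows. This is a minimal, self-contained illustration: the `embed` function is a toy bag-of-words hash standing in for a real embedding model, and the in-memory list stands in for a vector DB such as Elasticsearch or Qdrant; document IDs and texts are invented for the example.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy embedding: hash each token into a fixed-size, normalized vector.
    # A real setup would call an embedding model here instead.
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# In-memory stand-in for the vector DB (e.g. Elasticsearch or Qdrant).
vector_db: list[dict] = []

def ingest(doc_id: str, source: str, text: str) -> None:
    # The data processing unit: vectorize each document and store the
    # embedding alongside its metadata for later retrieval.
    vector_db.append({"id": doc_id, "source": source,
                      "text": text, "vector": embed(text)})

ingest("pub-001", "EU Publications", "Annual report on EU law in force")
ingest("who-002", "EU Whoiswho", "Directory of EU institutions and staff")
```

In a production setup the same loop would batch documents through an embedding endpoint and upsert the vectors into the chosen vector DB with their source metadata, so that retrieved hits can be cited back to the user.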
User query
The user submits a query through the OP Portal search or OP Portal Publio (OP's enterprise applications). The best-suited answer is found using RAG, based on the vectors available in the vector DB. A prompt and the relevant sources are passed to an LLM (for example OpenAI GPT-4 or GPT-3.5, Meta Llama, or Anthropic Claude), which, given the query, the prompt, and the relevant sources, creates an answer that is returned to the user. Multiple answer styles can potentially be provided, e.g. simple, advanced, or expert answers.
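The query flow can be sketched in the same spirit. Again this is a hedged, self-contained illustration, not the PoC's actual code: the toy `embed` function stands in for a real embedding model, the two hard-coded store rows stand in for the vector DB, and the final LLM call is replaced by returning the assembled prompt so the retrieval and prompt-building steps can be inspected.

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy bag-of-words embedding; a real deployment would use an
    # embedding model consistent with the one used at ingestion time.
    vec = [0.0] * dim
    for token in text.lower().split():
        token = token.strip("?.,;:")
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Minimal stand-in rows for the vector DB (texts are invented examples).
store = [
    {"source": "EU Law in Force",
     "text": "Regulation on data protection applies to all member states"},
    {"source": "EU Publications",
     "text": "Annual budget report of the European Union"},
]
for row in store:
    row["vector"] = embed(row["text"])

def answer(query: str, style: str = "Simple", top_k: int = 1) -> str:
    # 1. RAG retrieval: rank stored documents by vector similarity.
    q = embed(query)
    hits = sorted(store, key=lambda r: cosine(q, r["vector"]),
                  reverse=True)[:top_k]
    # 2. Assemble the prompt from the query, the requested answer style,
    #    and the retrieved sources.
    prompt = (f"Answer in {style} style.\nQuestion: {query}\n"
              + "\n".join(f"Source ({h['source']}): {h['text']}"
                          for h in hits))
    # 3. A real system would now send `prompt` to an LLM (e.g. GPT-4,
    #    Llama, Claude); here we return it for inspection instead.
    return prompt

print(answer("Which regulation covers data protection?"))
```

The `style` parameter mirrors the idea of multiple answer styles (simple, advanced, expert): the same retrieved sources are kept, and only the prompt instruction to the LLM changes.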