Thank you for your great work!
I couldn't find any code in the repository that processes the retrieved content, and the training data under LLMs/data_id appears to be identical to that used in ChatKBQA. Could you point me to where the retrieved knowledge (entities, relations, subgraphs) is actually handled or transformed before being fed into the model? Thank you!