Experimental Objective
The previous approach of using an LLM to construct triggers worked reasonably well, with a hit rate of roughly 1/5. There is now a new method, described in the Preliminaries section. The ultimate goal is to use this new method to improve the entity-recognition hit rate. My current task is to construct the dataset.
Preliminaries
Paper Close Reading
This work mainly references: AlignRE: An Encoding and Semantic Alignment Approach for Zero-Shot Relation Extraction.
The purpose of this task is to construct a set of triggers for a relation extraction model. Simply put, the trigger set tells the model which trigger words correspond to which relations.
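The trigger set described above can be thought of as a mapping from relation labels to trigger phrases. A minimal sketch of that data structure, with purely illustrative relation names and triggers (none of them come from the actual dataset):

```python
# Hypothetical trigger dataset: each relation label maps to the trigger
# phrases that signal it. Labels and phrases here are illustrative only.
trigger_dataset = {
    "place_of_birth": ["born in", "native of"],
    "employer": ["works for", "employed by"],
}

def relations_for(sentence, dataset):
    """Return the relations whose triggers appear in the sentence."""
    return [rel for rel, triggers in dataset.items()
            if any(t in sentence for t in triggers)]

print(relations_for("She was born in Paris.", trigger_dataset))
# ['place_of_birth']
```

A lookup like this is only the simplest use of the mapping; the point of the dataset is to supply these trigger-to-relation pairs as supervision for the model.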
Start by testing scenarios most relevant to your use case. See if your chatbot can reliably navigate discussions with limited human intervention.
Generate a semantic embedding for each new conversation, add the message body to a vector store for retrieval, and query the vector store for relevant messages to fill in the LLM context.
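The embed-store-query loop above can be sketched end to end. This is a toy version: the bag-of-words "embedding" and in-memory list stand in for a real neural encoder and vector database, and all function names are my own:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words embedding; a real system uses a neural encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

store = []  # the "vector store": (embedding, message) pairs

def add_message(text):
    store.append((embed(text), text))

def query(text, k=2):
    """Return the k stored messages most similar to the query."""
    q = embed(text)
    return [m for _, m in sorted(store, key=lambda p: -cosine(q, p[0]))[:k]]

add_message("the model extracts relations from text")
add_message("dinner plans for friday")
context = query("relation extraction model", k=1)
```

The retrieved `context` is what would be prepended to the LLM prompt in the final step.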
Deep learning models can handle semantics by vectorizing natural language.
Node: documents are parsed into nodes.
Response Synthesis: the process of merging the retrieved nodes into a response output.
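The node-parsing and response-synthesis steps can be sketched as follows. Sentence-based chunking and plain concatenation are simplifying assumptions; a real pipeline would use a proper splitter and pass the merged context to an LLM:

```python
def parse_nodes(document, chunk_size=3):
    """Parse a document into nodes of chunk_size sentences each."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    return [". ".join(sentences[i:i + chunk_size])
            for i in range(0, len(sentences), chunk_size)]

def synthesize(nodes, question):
    """Merge retrieved nodes into one response context (toy version:
    concatenation; a real system feeds this to an LLM)."""
    return f"Q: {question}\nContext: " + " | ".join(nodes)

doc = ("Triggers signal relations. Embeddings encode meaning. "
       "Retrieval finds context. Synthesis merges nodes.")
nodes = parse_nodes(doc, chunk_size=2)
answer_context = synthesize(nodes, "How does retrieval work?")
```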
When reading a long article, your focus naturally shifts from one word to another depending on the context. The attention mechanism mimics this behavior, allowing models to selectively concentrate on specific elements of the input while ignoring others.
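This selective weighting is what scaled dot-product attention computes. A minimal single-query sketch (plain Python lists instead of tensors, purely for illustration):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query: score each key against
    the query, softmax the scores, and take the weighted sum of values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query matches the first key better, so the output leans toward
# the first value (something between 0 and 10, closer to 10's side).
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[10.0], [0.0]])
```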
During retrieval, use multiple retrieval methods simultaneously to gather candidate documents, then fuse the results of the different methods to obtain the final retrieval result.
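One common way to fuse multiple ranked retrieval results, sketched here as an assumption rather than the method the notes necessarily intend, is reciprocal rank fusion (RRF):

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: score each document by summing
    1 / (k + rank) over every ranked list it appears in."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["d1", "d2", "d3"]   # e.g. a BM25 ranking (illustrative)
dense = ["d2", "d3", "d1"]     # e.g. an embedding ranking (illustrative)
fused = rrf_fuse([lexical, dense])
# "d2" wins: it ranks high in both lists.
```

Documents favored by both retrievers rise to the top, which is exactly the fusion behavior the note describes.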