While efforts are underway to eliminate LLM hallucinations entirely, interim solutions such as Factool are essential for mitigating the issue.

Factool is designed to identify and rectify factual inaccuracies in LLM output, enhancing the quality and credibility of the generated text.

Factool currently supports four tasks:

  • knowledge-based QA: Factool detects factual errors in knowledge-based QA.
  • code generation: Factool detects execution errors in code generation.
  • mathematical reasoning: Factool detects calculation errors in mathematical reasoning.
  • scientific literature review: Factool detects hallucinated scientific literature, such as fabricated or misattributed citations.
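As a rough sketch of how these four tasks might be fed to a fact-checker, the snippet below packages prompt/response pairs tagged with a task category. The field names (`prompt`, `response`, `category`) and category labels (`kbqa`, `code`, `math`, `scientific`) are modeled on Factool's README and are assumptions here; check the project's documentation for the current interface.

```python
# Sketch of a per-task input payload for a fact-checking tool like Factool.
# Field names and category labels are assumptions, not a verified API.
VALID_CATEGORIES = {"kbqa", "code", "math", "scientific"}

def make_input(prompt: str, response: str, category: str) -> dict:
    """Package one LLM prompt/response pair for fact-checking."""
    if category not in VALID_CATEGORIES:
        raise ValueError(f"unknown category: {category!r}")
    return {"prompt": prompt, "response": response, "category": category}

inputs = [
    # knowledge-based QA: check factual claims in the response
    make_input("Who wrote 'The Origin of Species'?",
               "It was written by Charles Darwin.", "kbqa"),
    # code generation: check the generated code for execution errors
    make_input("Write a function that reverses a string.",
               "def rev(s):\n    return s[::-1]", "code"),
    # mathematical reasoning: check the arithmetic
    make_input("What is 17 * 23?", "17 * 23 = 391", "math"),
    # scientific literature review: check that cited papers exist
    make_input("Cite a paper on hallucination detection.",
               "Doe et al. (2020), 'A Placeholder Title'",  # hypothetical citation
               "scientific"),
]

# A Factool-style checker would then consume this list, e.g. (not run here):
#   from factool import Factool
#   results = Factool("gpt-4").run(inputs)
print(len(inputs))
```

Tagging each pair with its task category lets the checker dispatch to the right verification strategy (search for QA claims, execution for code, calculation for math, citation lookup for literature).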