What is Document Chat?
Document Chat allows you to ask natural language questions directly about individual patents or patent families. It uses an AI large language model to “read” the full text of the patent and some of its metadata (such as title, applicants, and publication details) and provide contextual answers. It does not see other search results, the text of cited patents, or the drawings of the document. However, if descriptions of images or citations are included in the patent text, the AI may be able to answer based on those references.
Think of it as an assistant that helps you understand, summarize, and navigate a patent document more quickly, while still requiring you to validate answers against the source text.
How can I ask good questions?
Good questions are clear, specific, and focused on the document itself. The AI works best when you guide it toward a concrete aspect of the patent.
Specific is better than vague.
Instead of “What is this about?” ask “Summarize the invention in simple terms.”
Instead of “Tell me about the claims,” ask “List the independent claims and summarize their differences.”
Reference the structure of the document.
Questions like “List the main embodiments described in the detailed description” or “Summarize claim 1 and identify supporting paragraphs” produce stronger results.
Focus on one aspect at a time.
Break a broad question into focused steps, which you can then ask one at a time or together as a batch. For example:
- What problem does the invention solve?
- What is the proposed solution?
- What are the benefits of the invention?
Request references.
For defensible results, ask the AI to cite the supporting paragraph: “What embodiments does this document mention? Cite the relevant paragraph and summarize the embodiments.”
Combine, but don’t overload.
You can combine related questions in one message, but very long, hierarchical, or unrelated multi-part questions may confuse the AI. Keep batched questions related and clearly structured.
What kind of questions can I ask?
Document Chat supports a wide range of query types, including:
- Summarization: “Summarize the invention.” “Explain the invention in simple terms.”
- Problem/solution/benefit: “What problem does the invention solve?” “What is the solution?” “What are the benefits?”
- Technical detail: “List all part names with reference numbers.” “What materials are mentioned in the embodiments?”
- Legal/structural: “Which claims are independent?” “What is the distinguishing feature over prior art mentioned in the patent?”
- Applications and impact: “What applications are mentioned?” “How does this invention impact [x]?”
- Targeted aspects: “Does the patent mention [y]?” “Summarize aspects related to [z].”
You can also experiment with synonyms. For example, if you ask about “artificial intelligence,” the AI may include mentions of “machine learning” if these are treated as equivalent in the text.
Can I ask questions in other languages?
Yes, you can. Document Chat supports multiple languages, but answers are generally more accurate in English.
Can I ask multiple questions at once?
Yes. You can send several questions together in batch, for example:
- What problem does it solve?
- How does it solve it?
- What are the main benefits?
This is useful for efficiency, but remember that extremely long or complex multi-part prompts can reduce accuracy.
Can I ask the AI to cite where it found the answer?
Yes. Document Chat is designed to give references where possible. If it does not, you can request: “Cite the relevant paragraph where embodiments are described.” Always double-check, since occasionally the AI may mix up numbering.
Any tips for better answers?
- Provide enough context in your question.
- Ask follow-up questions to go deeper.
- If you notice an incorrect answer, clear the chat before continuing so the error does not influence follow-up responses.
- Use grammatically correct, precise wording.
- Adopt domain-specific terms from the answers to refine further questions.
- Rephrase your question if the first answer is incomplete or unclear.
- Specify the format you want for the answer, e.g. “List as bullet points” or “Summarize in two sentences.”
- Build on previous answers to narrow down details instead of starting from scratch each time.
Limitations of Document Chat
Scope of understanding.
The AI only sees the current patent text and metadata. It does not see cited documents, search results, or drawings directly. It may summarize figures or citations only if the text describes them.
Document length.
It works on patents of any length. However, if the document is hundreds of pages long, the AI may not consider all content at once. In such cases, answers are flagged with a warning icon.
Accuracy.
Most answers are accurate, but errors are possible, especially if you ask about information that does not exist in the document. In such cases the AI may attempt an answer instead of stating that the information is absent. Validation against the patent text is essential.
Comparisons.
Document Chat cannot directly compare two patents. You may copy-paste another document into the chat for a rough comparison, but this risks confusing the AI. For robust comparisons, use dedicated tools such as Claims Comparison.
Search and cross-document tasks.
The AI cannot perform searches or access other features in Origin or PatBase. It only answers based on the single open document.
Tables and images.
It can process tables represented as text but not scanned image tables. It does not read text embedded in drawings or diagrams.
Consistency of answers.
Repeating the same question may not always yield the same answer. Context, ongoing improvements to the system, and technical randomness can all introduce variation.
Prior art context.
The AI can only reference prior art mentioned in the current patent. It cannot analyze external prior art documents.
Key Takeaway
Document Chat is a powerful tool for understanding patent documents faster and more interactively. It works best when you ask precise, well-structured, document-focused questions and use it as a support for comprehension and navigation, not a substitute for expert review. Always validate key insights against the source text before drawing conclusions.