Frequently Asked Questions

About the Platform

What does TeraContext.AI do?

TeraContext.AI is an AI-powered pre-construction platform that automates the workflow from RFP ingestion to GC proposal assembly. You upload your spec books and drawing sets, the AI classifies every section against industry-standard WBS taxonomies, and you use the platform to generate scope packages, manage subcontractor bids, and assemble your proposal response.


What file types can I upload?

PDF, DOCX, XLSX, PPTX, Markdown, HTML, CSV, and images (PNG, JPG, TIFF). Most spec books come as PDFs. Upload spec volumes as Specification type and drawing sets as Drawing type — this controls which AI pipeline runs on each document.


How long does processing take?

Processing time depends on document size and type.

You can upload additional documents at any time, even while others are processing. The Processing tab shows real-time progress through all 7 phases: PDF Extraction, Table Extraction, Section Splitting, Embedding, WBS Classification, Graph Building, and Cross-Ref Validation.


What WBS taxonomies do you support?

TeraContext.AI ships with 10 industry-standard WBS taxonomies.

You can also clone any standard taxonomy and customize it — add, rename, delete, or reorganize codes to match your firm’s internal classification system.


What project types do you support?

Multi-Family, Office, Municipal, Parking, Retail, Hotel, Strip Mall, Mixed Use, and Other. Select the project type when creating a new project.


Can I create custom scope packages?

Yes. You can auto-generate scope packages by division (one package per division that has classified sections), then customize from there — move sections between packages, rename packages, split or merge them, or create new ones from scratch. Each package can be exported as CSV, PDF, Word, or Markdown.
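As a rough illustration of what a CSV export of a scope package contains, here is a sketch using only the Python standard library. The column names and data model are invented for this example and are not the platform's actual schema.

```python
# Hypothetical sketch of a scope-package CSV export.
# Field names ("name", "sections", "code", "title") are illustrative.
import csv
import io

package = {
    "name": "Division 03 - Concrete",
    "sections": [
        {"code": "03 20 00", "title": "Concrete Reinforcing"},
        {"code": "03 30 00", "title": "Cast-in-Place Concrete"},
    ],
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Package", "Section Code", "Section Title"])
for s in package["sections"]:
    writer.writerow([package["name"], s["code"], s["title"]])

print(buf.getvalue())
```

The same row structure maps naturally onto the PDF, Word, and Markdown exports.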


How does bid analysis work?

When a subcontractor submits their bid, you can record the amount directly. For deeper analysis, upload the sub’s bid response document (PDF or DOCX). The AI analyzes it using keyword extraction and LLM comparison against the scope package, then reports its findings.

The Compare tab shows all bids for a scope package side-by-side with lowest, highest, and average amounts.
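The side-by-side comparison is simple summary statistics over the recorded bid amounts. A minimal sketch, with invented subcontractor names and amounts:

```python
# Illustrative bid comparison for one scope package: lowest,
# highest, and average amounts. All data here is made up.

bids = {
    "Acme Concrete": 412_000,
    "Bedrock Forming": 398_500,
    "Cascade Structures": 441_250,
}

lowest = min(bids.values())
highest = max(bids.values())
average = sum(bids.values()) / len(bids)

print(f"Low ${lowest:,}  High ${highest:,}  Avg ${average:,.0f}")
```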


How accurate is the AI classification?

Classification accuracy depends on document quality and formatting. The system uses two complementary classification methods.

Each classification gets a confidence score. You can bulk-confirm all classifications above 90% confidence (typically 70-80% of sections), then manually review the remaining edge cases. The Verify All with LLM button sends each classification for a second AI opinion, flagging discrepancies.

The human review step ensures accuracy before scope packages are created — the AI does the heavy lifting, but your estimators approve every classification.
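The bulk-confirm step is essentially a threshold filter over per-section confidence scores. A minimal sketch, with a hypothetical data model (the field names below are not the platform's actual API):

```python
# Sketch of the bulk-confirm workflow: auto-approve classifications
# at or above the confidence threshold, queue the rest for review.
# "section", "wbs_code", and "confidence" are illustrative fields.

CONFIDENCE_THRESHOLD = 0.90

classifications = [
    {"section": "03 30 00", "wbs_code": "03-CONCRETE", "confidence": 0.97},
    {"section": "09 91 00", "wbs_code": "09-FINISHES", "confidence": 0.88},
    {"section": "23 05 00", "wbs_code": "23-HVAC", "confidence": 0.94},
]

confirmed = [c for c in classifications if c["confidence"] >= CONFIDENCE_THRESHOLD]
needs_review = [c for c in classifications if c["confidence"] < CONFIDENCE_THRESHOLD]

print(f"Auto-confirmed: {len(confirmed)}, manual review: {len(needs_review)}")
```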


Getting Started

How do I get started?

TeraContext.AI is currently available through our Early Access program. Contact us to request access. We’ll schedule a demo using your own spec book so you can see the full pipeline — upload through proposal assembly — with real project data.


Is this production-ready?

TeraContext.AI is in Early Access (v0.1.0). The core pipeline is functional — upload, processing, classification, scope packaging, bid management, drawing analysis, query, and proposal assembly all work end-to-end. We’re working closely with a select group of general contractors to refine the platform based on real-world pre-construction workflows.


How much does it cost?

Contact us for Early Access pricing. We’re working with initial customers to establish pricing that reflects the value delivered to estimating teams.


Security & Data

Is my data secure?

Yes. Your documents are processed securely.

We take project document confidentiality seriously — construction specifications and bid information are sensitive, and our platform is designed accordingly.


Understanding the Technology

What is a context window?

A context window is the maximum amount of text (measured in tokens) that a large language model can process in a single interaction — the model’s “working memory.” Context windows have grown from 4K tokens in early models to 128K and even 1M+ tokens today. However, construction project documentation often exceeds even these limits, and costs scale with context length.
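To get a feel for the scale, here is a back-of-envelope token estimate using the common rule of thumb of roughly one token per four characters of English text. Real tokenizers vary by model, so this is an approximation only:

```python
# Rough token estimate for a spec book. The 4-characters-per-token
# ratio is a common approximation, not an exact tokenizer.

def estimate_tokens(text: str) -> int:
    return len(text) // 4

spec_page = "Cast-in-place concrete shall achieve 4000 PSI at 28 days. " * 20
pages_in_spec_book = 2000  # hypothetical project size

tokens_per_page = estimate_tokens(spec_page)
total = tokens_per_page * pages_in_spec_book
print(f"~{tokens_per_page} tokens/page, ~{total:,} tokens for the spec book")
```

Even this modest example lands in the hundreds of thousands of tokens before drawings and addenda are counted.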


Why can’t I just use a model with a larger context window?

While larger context windows help, they face practical limitations:

Cost: Processing costs scale with context length. A 1M token query can cost 10-100x more than a 10K token query.

Quality Degradation: Accuracy often degrades with very long inputs — the “lost in the middle” problem where models struggle to attend to information buried deep in context.

Document Size: Construction spec books with drawings, addenda, and supporting documents can reach 10M+ tokens — well beyond any current context window.
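The cost point above can be made concrete with simple arithmetic. The per-token rate below is invented for illustration and is not any vendor's actual pricing:

```python
# Back-of-envelope cost comparison: input-token costs scale
# linearly with context length. Rates here are hypothetical.

price_per_1k_input_tokens = 0.003  # illustrative $/1K tokens

full_context_query = 1_000_000  # send the whole spec book
rag_query = 10_000              # send only retrieved sections

full_cost = full_context_query / 1000 * price_per_1k_input_tokens
rag_cost = rag_query / 1000 * price_per_1k_input_tokens

print(f"Full context: ${full_cost:.2f} per query")
print(f"RAG retrieval: ${rag_cost:.4f} per query")
print(f"Ratio: {full_cost / rag_cost:.0f}x")
```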

TeraContext.AI uses Retrieval-Augmented Generation (RAG) and GraphRAG to work with documents of any size efficiently — retrieving only relevant sections rather than loading entire documents.


What is RAG (Retrieval-Augmented Generation)?

RAG allows LLMs to work with information beyond their context window by retrieving only relevant chunks when needed:

  1. Documents are split into manageable sections
  2. Each section is converted to a searchable vector (embedding)
  3. When you ask a question, the system finds the most relevant sections
  4. Only those sections are sent to the LLM with your query

This is how the Query tab works — you ask a question, the system retrieves relevant spec sections and drawing content, and the LLM generates an answer with citations.
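The retrieval step can be sketched in a few lines. This toy version uses word-count vectors and cosine similarity in place of a real neural embedding model, and the function names are illustrative, not the platform's API:

```python
# Minimal sketch of RAG retrieval: embed sections, embed the query,
# return the most similar section. A real system uses a neural
# embedding model instead of this toy bag-of-words vector.
from collections import Counter
import math

def embed(text):
    """Toy embedding: a word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sections = {
    "03 30 00": "cast in place concrete footings psi strength",
    "09 91 00": "painting interior finishes coatings",
}
index = {sid: embed(text) for sid, text in sections.items()}

query = "what concrete psi is required for footings"
q_vec = embed(query)
best = max(index, key=lambda sid: cosine(q_vec, index[sid]))
print(best)  # the concrete section wins; only it is sent to the LLM
```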


What is GraphRAG?

GraphRAG extends RAG by building a knowledge graph that captures relationships between concepts, entities, and document sections. Instead of just finding similar text, it understands how specs, drawings, standards, and codes connect to each other.

In TeraContext.AI, the Graph Building phase of processing creates these cross-reference relationships — enabling the system to link drawing references to spec sections and map dependencies across divisions.
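Conceptually, the cross-reference graph is a set of typed edges between spec sections, drawings, and standards. A hedged sketch with invented node IDs and edge types:

```python
# Toy cross-reference graph: nodes are spec sections, drawings, and
# standards; edges record how they reference each other. The IDs and
# edge types below are invented for illustration.

graph = {
    "spec:03 30 00": {"references": ["std:ACI 318"], "shown_on": ["dwg:S-101"]},
    "dwg:S-101":     {"details": ["spec:03 30 00", "spec:03 20 00"]},
    "spec:03 20 00": {"references": ["std:ASTM A615"]},
}

def neighbors(node):
    """All nodes directly connected to `node`, across all edge types."""
    out = set()
    for targets in graph.get(node, {}).values():
        out.update(targets)
    return out

# Starting from a drawing, walk to its spec sections and their standards:
for spec in sorted(neighbors("dwg:S-101")):
    print(spec, "->", sorted(neighbors(spec)))
```

Unlike plain similarity search, a traversal like this can answer "which standards govern what this drawing shows" even when the drawing and the standard share no text.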


What is RAPTOR / multi-layer summarization?

RAPTOR (Recursive Abstractive Processing for Tree-Organized Retrieval) builds hierarchical understanding of documents — from high-level division overviews down to specific section details. This enables both broad questions (“What’s the project scope?”) and precise queries (“What concrete PSI is required for footings?”) to be answered effectively.
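The tree-building idea can be sketched as repeated group-and-summarize passes, where each pass produces one layer of the tree. The `summarize` function below is a stand-in for an LLM call:

```python
# Sketch of RAPTOR-style hierarchical summarization: leaf sections
# are grouped and summarized recursively until one root remains.

def summarize(texts):
    # Placeholder: a real system would call an LLM here.
    return "summary(" + " + ".join(texts) + ")"

def build_tree(leaves, group_size=2):
    """Return the layers of the tree, from leaf sections up to the root."""
    layers = [leaves]
    while len(layers[-1]) > 1:
        level = layers[-1]
        parents = [summarize(level[i:i + group_size])
                   for i in range(0, len(level), group_size)]
        layers.append(parents)
    return layers

leaves = ["footing specs", "slab specs", "paint specs", "door specs"]
tree = build_tree(leaves)
print(len(tree), "layers; root:", tree[-1][0])
```

Broad questions are answered from the upper layers; precise ones are answered from the leaves.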


Still Have Questions?

Contact Us
See How It Works
View Use Cases