Cognis AI Engine
A reasoning engine with 13 sub-packages, not a chatbot. Cognis classifies intent, gathers evidence from 11 pillars, and constructs answers through an iterative reasoning loop.
What Cognis Is
Cognis is a bounded context in the InteliG backend with 13 sub-packages: thread, routing, reasoning, actions, session, artifact, facilitation, generallm, productknowledge, preference, intent, agent, and shared. It is the intelligence layer that sits between your questions and your engineering data.
When you ask a question, Cognis does not search a database and return rows. It runs a multi-pillar reasoning loop: classify intent, route to the right execution strategy, fetch evidence from relevant data pillars, evaluate whether it has enough information, and synthesize a grounded response. If the evidence is insufficient, it loops back and fetches more.
It has its own state machine, working memory, prior knowledge store, and iterative reasoning cycle. Think of it as the difference between a search engine and an analyst. A search engine finds documents. Cognis finds truth.
Three Interaction Modes
Cognis has three modes, each with a different purpose and execution path.
Ask
Analytical questions about your org data. Routes through the full reasoning loop with evidence gathering and multi-iteration synthesis.
Chat
Freeform conversation. General-purpose LLM without org data access. For writing, brainstorming, editing, and questions that have nothing to do with your codebase.
Facilitated
Guided artifact creation. Cognis walks you through building strategic artifacts (vision, themes, initiatives, roadmaps) with structured facilitation questions and approval workflows.
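The three modes amount to a dispatch decision at the top of the stack. A minimal sketch, assuming hypothetical handler names (these are illustrative, not the real Cognis API):

```python
from enum import Enum

class Mode(Enum):
    ASK = "ask"                  # full reasoning loop over org data
    CHAT = "chat"                # general-purpose LLM, no org data access
    FACILITATED = "facilitated"  # guided artifact creation

# Hypothetical handler names, for illustration only.
HANDLERS = {
    Mode.ASK: "reasoning_loop",
    Mode.CHAT: "general_llm",
    Mode.FACILITATED: "facilitation_engine",
}

def handler_for(mode: Mode) -> str:
    return HANDLERS[mode]
```

The point is that mode selection happens before any reasoning: Chat never touches the evidence pillars, and Facilitated bypasses the Ask pipeline entirely.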
8 Ask Categories
In Ask mode, Cognis organizes questions into 8 categories, each mapping to specific intelligence pillars. These are not arbitrary groupings -- they represent the dimensions of engineering health that matter.
Performance and Health
Engagement, throughput, cycle time, and quality signals.
People and Teams
Contributor patterns, team dynamics, and engagement risk.
Strategy and Execution
Initiative portfolio, strategic alignment, investment focus, and roadmap health.
Delivery and Shipping
DORA metrics, deployment frequency, lead time, and delivery success.
Codebase and Architecture
Commit quality, classification, AI attribution, and risk scoring.
Knowledge
Meeting summaries, decisions, and action item tracking.
Finance
Engineering ROI, cost per PR, and investment analysis.
Cognis Actions
Create strategic artifacts through guided facilitation.
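As a rough sketch, the category-to-pillar mapping could be a static table like the one below. The category and pillar names come from this document, but the exact wiring between them is an assumption:

```python
# Hypothetical mapping of the 8 Ask categories to evidence pillars.
# Which pillars each category draws on is assumed, not documented.
CATEGORY_PILLARS = {
    "performance_and_health": ["engagement", "throughput", "cycle_time", "quality"],
    "people_and_teams": ["engagement", "change_detection"],
    "strategy_and_execution": ["strategy"],
    "delivery_and_shipping": ["delivery"],
    "codebase_and_architecture": ["codebase", "commit_intelligence"],
    "knowledge": ["knowledge"],
    "finance": ["cost_roi"],
    "cognis_actions": [],  # routes to facilitation, not to evidence pillars
}

def pillars_for(category: str) -> list[str]:
    return CATEGORY_PILLARS.get(category, [])
```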
How Cognis Reasons
Every question goes through a five-stage pipeline. This is not a prompt-and-pray architecture. Cognis reasons deliberately.
- → Intent Classification — Keyword-based priority classifier determines what kind of question you asked: insight request, trend analysis, follow-up, comparative analysis, proactive recommendation, or general knowledge. Deterministic, fast, debuggable.
- → Pillar Routing — The semantic reasoning pipeline extracts entities (contributors, teams, repos, initiatives), relationships, and metric domains from your question, then maps them to the right data sources. Entity-based routing, not keyword matching.
- → Evidence Gathering — Fetches data from one or more of the 11 intelligence pillars. A sufficiency evaluator checks whether the evidence resolves the question. If not, the loop iterates -- fetching from additional pillars until it has enough or hits the iteration limit.
- → Reasoning Loop — A state machine that cycles through initialization, unknown identification, fetching, sufficiency evaluation, and synthesis. Up to 3 iterations depending on epistemic intent. Five execution strategies handle different question types: Execution (data questions), Discovery (capability questions), ProductGuidance (how-to questions), Facilitation (artifact creation), and GeneralLM (non-org questions).
- → Streamed Response — The synthesized answer streams token-by-token via Server-Sent Events. Every response is grounded in the evidence collected, not hallucinated from training data.
If the semantic pipeline detects no org-specific entities in your question, Cognis falls back to the general LLM instead of hitting the data layer. "Explain Kubernetes architecture" does not touch your org data.
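Putting the five stages and the general-LLM fallback together, the control flow looks roughly like this. Every function here is a toy stub standing in for a real component; only the shape of the loop is the point:

```python
# Toy stubs standing in for real Cognis components.
ORG_ENTITIES = {"Team Alpha", "backend-api", "Priya"}   # hypothetical org registry

def classify_intent(q): return "insight_request"        # stage 1 (stubbed)
def route(q):                                           # stage 2 (stubbed)
    entities = [e for e in ORG_ENTITIES if e.lower() in q.lower()]
    return entities, (["engagement"] if entities else [])
def general_llm(q): return f"general-llm: {q}"
def fetch(pillars): return [f"evidence:{p}" for p in pillars]
def sufficient(evidence, q): return len(evidence) >= 1
def next_pillars(q, evidence): return ["throughput"]
def synthesize(q, evidence): return f"grounded answer from {len(evidence)} evidence items"

def answer(question: str, max_iterations: int = 3) -> str:
    classify_intent(question)                 # 1. intent classification
    entities, pillars = route(question)       # 2. pillar routing
    if not entities:                          # no org-specific entities:
        return general_llm(question)          #    skip the data layer entirely
    evidence = []
    for _ in range(max_iterations):           # 3-4. gather until sufficient
        evidence += fetch(pillars)
        if sufficient(evidence, question):
            break
        pillars = next_pillars(question, evidence)
    return synthesize(question, evidence)     # 5. synthesis (streamed via SSE in prod)
```

Note the fallback: a question that matches no org entities never reaches `fetch`, which is how "Explain Kubernetes architecture" stays out of your org data.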
Scoping
Every Cognis thread is scoped. The scope determines what data Cognis can access and reason over.
- Organization — Org-wide intelligence across all teams, repositories, and contributors. The default scope.
- Team — Filtered to a specific team and their repositories. Useful for team leads and managers.
- Contributor — Individual contributor patterns, output, engagement, and growth trajectory.
- Personal — Your own activity. For ICs who want to understand their own work patterns.
Scopes also carry an optional date range. If unset, Cognis defaults to the last 30 days. All data access is tenant-isolated with Row Level Security -- you only see what belongs to your organization.
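A minimal sketch of a scope with the 30-day default window. The field names are assumptions; only the kinds and the default come from the description above:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical scope model; field names are illustrative.
@dataclass
class ThreadScope:
    kind: str                       # "organization" | "team" | "contributor" | "personal"
    target_id: Optional[str] = None # team/contributor id when kind is not org-wide
    start: Optional[date] = None
    end: Optional[date] = None

    def window(self) -> tuple[date, date]:
        # When no range is set, default to the trailing 30 days.
        end = self.end or date.today()
        start = self.start or end - timedelta(days=30)
        return start, end
```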
The 11 Evidence Pillars
Cognis reasons over 11 distinct data pillars. Each pillar represents a dimension of engineering health and answers a core question about your organization.
Engagement
Who is building? Contributor activity, engagement scores, work patterns, at-risk detection.
Throughput
How much is shipping? PR merge rates, commit frequency, delivery velocity.
Cycle Time
How fast does work move? PR lifecycle, review times, bottleneck identification.
Quality
How good is the output? Bug fix rates, PR quality, code review metrics, defect rate.
Cost & ROI
What does it cost? Contributor ROI, cost per PR, investment allocation.
Change Detection
What shifted? Trend indicators, anomaly detection, change alerts.
Codebase
How healthy is the code? Repository health, branch hygiene, ownership risks, bus factor.
Commit Intelligence
What is each commit doing? Quality scores, classifications (bug/feature/refactor), effort estimates, AI attribution.
Strategy
Are we aligned? Initiative status, theme progress, alignment scores, investment focus.
Delivery
Are we shipping? Deployment frequency, DORA metrics, release cadence, environment stats.
Knowledge
What do we know? Meeting summaries, decisions, action items, intelligence snapshots.
The semantic pipeline can query multiple pillars in a single reasoning cycle. A question like "How do contributor engagement patterns affect initiative velocity?" pulls evidence from Engagement, Strategy, and Throughput simultaneously.
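That fan-out can be sketched as a metric-domain-to-pillar lookup over the question text. The three domains below are just the ones in the example question, and the matching is deliberately naive:

```python
# Toy sketch: each metric domain found in a question maps to a pillar,
# so one reasoning cycle can fan out to several pillars at once.
DOMAIN_TO_PILLAR = {
    "engagement": "Engagement",
    "initiative": "Strategy",
    "velocity": "Throughput",
}

def pillars_for_question(question: str) -> set[str]:
    q = question.lower()
    return {pillar for domain, pillar in DOMAIN_TO_PILLAR.items() if domain in q}
```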
Epistemic Reasoning
Cognis does not pretend to know everything. Every response carries epistemic markers that separate what it knows from what it does not.
- Facts — What the data definitively shows. Grounded in analyzed commits, PRs, deployments, and patterns.
- Unknowns — What the data does not cover. Gaps in context, missing signals, or areas outside the analysis window.
- Confidence — How certain Cognis is in its conclusions. High confidence means strong signal coverage. Low confidence means the answer is directional, not definitive.
Beyond these markers, Cognis classifies your epistemic intent -- whether you are exploring state, confirming a hypothesis, comparing entities, seeking guidance, or refining a prior answer. This classification determines how many reasoning iterations to run and how aggressively to gather evidence. A "What should I focus on?" question gets up to 3 iterations; a "Why did you say that?" follow-up gets 1.
This is a design choice, not a limitation. Leaders make better decisions when they know what they don't know.
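The intent-to-budget mapping can be sketched as a table. The document pins down only two budgets (guidance questions get up to 3 iterations, refinements get 1), so the middle values here are assumptions:

```python
# Iteration budgets keyed by epistemic intent. Only seeking_guidance (3)
# and refining_prior_answer (1) come from the doc; the rest are assumed.
ITERATION_BUDGET = {
    "exploring_state": 3,
    "confirming_hypothesis": 2,
    "comparing_entities": 2,
    "seeking_guidance": 3,
    "refining_prior_answer": 1,
}

def budget(intent: str) -> int:
    # Default to a single iteration for unrecognized intents.
    return ITERATION_BUDGET.get(intent, 1)
```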
Actions
Cognis does not just answer questions. It can take action -- with your explicit approval.
When you ask Cognis to create something (a vision, a theme, an initiative, a roadmap), it enters a facilitated session. It asks structured clarifying questions to gather the required fields, presents a proposal for your review, and executes only after you approve. The entire lifecycle is event-sourced and auditable.
- → Session Model — Each action creation starts a CognisSession tied to your thread. Only one active session per thread. While a session is active, all messages route to the artifact engine.
- → Artifact Model — A mutable working surface (draft state) where Cognis builds the artifact as it collects your inputs through facilitation.
- → Propose-Approve-Execute — Actions follow a strict lifecycle: proposed (with risk level and approval policy), approved or rejected by you, then executed. No mutation happens without your consent.
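The lifecycle above amounts to a small state machine: no path reaches `executed` without passing through `approved`. A sketch with illustrative state and event names:

```python
# Propose-approve-execute lifecycle as a transition table.
# State and event names are illustrative, not the real schema.
TRANSITIONS = {
    ("proposed", "approve"): "approved",
    ("proposed", "reject"): "rejected",
    ("approved", "execute"): "executed",
}

def advance(state: str, event: str) -> str:
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        # e.g. executing straight from "proposed" is rejected outright
        raise ValueError(f"illegal transition: {state} -> {event}")
    return nxt
```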
Memory
Cognis has a three-layer memory architecture. It does not rely on stuffing your entire conversation history into an LLM prompt.
- → Evidence Memory — The data fetched for a single message. Ephemeral. Lives only during the reasoning cycle and is discarded after synthesis. This is the raw material Cognis reasons over.
- → Working Memory — Persisted per-thread across turns. Tracks the focus entity (who or what you are discussing), the last pillar queried, and high-confidence facts established in prior turns. This is how Cognis handles follow-ups -- if you ask about Team Alpha and then say "What about their throughput?", working memory carries the context forward.
- → Knowledge Store — Cross-thread memory with 7-30 day expiry. After each reasoning cycle, notable findings (bus factor risks, at-risk contributors, dormant repos, stalled initiatives) are extracted and persisted. When you start a new conversation, Cognis loads relevant prior findings so it can say "Prior analysis confirms..." instead of re-deriving everything from scratch.
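A toy sketch of the working-memory layer resolving a follow-up. The attribute names mirror the description above but are assumptions, and the pronoun handling is deliberately simplistic:

```python
# Per-thread working memory: focus entity, last pillar, established facts.
class WorkingMemory:
    def __init__(self):
        self.focus_entity = None
        self.last_pillar = None
        self.facts = []            # high-confidence findings from prior turns

    def update(self, entity=None, pillar=None, fact=None):
        if entity:
            self.focus_entity = entity
        if pillar:
            self.last_pillar = pillar
        if fact:
            self.facts.append(fact)

    def resolve(self, question: str) -> str:
        # Pronoun-style follow-ups inherit the current focus entity.
        if "their" in question.lower() and self.focus_entity:
            return question.replace("their", f"{self.focus_entity}'s")
        return question
```

So after a turn about Team Alpha, `resolve("What about their throughput?")` carries the team forward instead of forcing the router to start from nothing.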