HiQ Cortex

Solutions · LCA dataset authoring

From field measurement to published dataset. Every DQI score justified.

Cortex Cowork guides the full dataset authoring workflow inside HiQ Editor — metadata, flow entry, mass-balance validation, DQI scoring, and review submission — and writes every modeling decision into the project record before anything is published.

The workflow

§ I

Five stages. One dataset record.

Publishing a background LCA dataset involves a fixed sequence of decisions — from what the dataset represents, to how its flows balance, to whether the data quality scores are defensible. Cortex handles the mechanics of each stage; the authoring decisions stay with the practitioner.

01

Dataset setup

Create a new dataset or open an existing one in HiQ Editor. Cortex confirms the basics — name, region, reference product, reference flow, system boundary, and time range — and determines the data source type (primary measurement, literature, or industry estimate), which governs how DQI is scored.

02

Flow entry

Add all inputs (materials, energy, auxiliaries) and outputs (products, co-products, emissions, waste). Cortex tracks unit consistency and flags flows that look out of proportion before the mass balance check runs.
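A flow record and the out-of-proportion flag can be sketched as follows. This is an illustrative sketch, not HiQ Editor's actual schema: the field names, the 100× threshold, and the assumption that amounts are expressed per unit of reference flow are all my own.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    direction: str   # "input" or "output"
    amount: float    # assumed normalized per unit of reference flow
    unit: str        # e.g. "kg", "kWh"
    category: str    # material, energy, emission, waste, ...

def flag_disproportionate(flows, reference_amount, ratio=100.0):
    """Flag flows that look out of proportion to the reference flow.

    `ratio` is an illustrative threshold, not a HiQ Editor setting.
    """
    return [f for f in flows
            if f.amount > ratio * reference_amount
            or (f.amount > 0 and f.amount < reference_amount / ratio)]
```

Flagged flows are surfaced to the practitioner before the mass balance check runs; nothing is deleted automatically.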

03

Validation

Single-calculation validation runs inside HiQ Editor. Cortex checks mass balance (inputs = outputs + losses) and compares the result magnitude against comparable published datasets. If either check fails, the workflow stops: the issue is presented with its likely causes before the user decides how to proceed.
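The mass balance check reduces to a single comparison. A minimal sketch, assuming a relative tolerance (the 1% default here is an assumed value, not HiQ's documented threshold):

```python
def mass_balance_ok(inputs_kg, outputs_kg, losses_kg, tolerance=0.01):
    """Check that inputs = outputs + losses within a relative tolerance.

    All amounts are assumed to be in the same mass unit (e.g. kg).
    """
    total_in = sum(inputs_kg)
    total_out = sum(outputs_kg) + sum(losses_kg)
    if total_in == 0:
        return total_out == 0
    return abs(total_in - total_out) / total_in <= tolerance
```

A failed check stops the workflow and hands the full flow list back to the practitioner, per checkpoint 01 below.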

04

DQI scoring

All five Pedigree Matrix dimensions — reliability, completeness, temporal correlation, geographic correlation, technical correlation — are scored against objective facts from the data source, not estimated. Cortex cites the source information the user provided when writing the scoring justification.
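A DQI record along these lines might look like the sketch below. The field names and tuple shape are illustrative, not HiQ Editor's schema; Pedigree Matrix scores conventionally run 1 (best) to 5 (worst).

```python
PEDIGREE_DIMENSIONS = (
    "reliability",
    "completeness",
    "temporal_correlation",
    "geographic_correlation",
    "technical_correlation",
)

def validate_dqi(scores: dict) -> list:
    """Return the problems in a DQI record; an empty list means valid.

    Each dimension must carry a 1-5 score and a justification that
    cites the source information the user provided.
    """
    problems = []
    for dim in PEDIGREE_DIMENSIONS:
        entry = scores.get(dim)
        if entry is None:
            problems.append(f"{dim}: not scored")
            continue
        score, justification = entry
        if score not in (1, 2, 3, 4, 5):
            problems.append(f"{dim}: score {score} outside 1-5")
        if not justification.strip():
            problems.append(f"{dim}: no source-grounded justification")
    return problems
```

An empty justification is treated the same as a missing score: Cortex asks for the source detail rather than estimating, per checkpoint 03 below.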

05

Review submission

Before submitting, Cortex runs a final checklist: metadata complete, flows entered, single calculation passed, documentation attached, sources cited, DQI scored. Any failing item blocks submission. The submission package includes a dataset summary paragraph and the one or two most uncertain points — so the reviewer does not have to reconstruct the authoring decisions.
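The six-item gate can be sketched as a simple filter; the item names below are paraphrases of the checklist above, not HiQ Editor identifiers.

```python
CHECKLIST = (
    "metadata_complete",
    "flows_entered",
    "single_calculation_passed",
    "documentation_attached",
    "sources_cited",
    "dqi_scored",
)

def submission_blockers(status: dict) -> list:
    """Names of checklist items that are missing or failing.

    Submission proceeds only when this list is empty; any entry
    blocks it and is named to the user.
    """
    return [item for item in CHECKLIST if not status.get(item, False)]
```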

HiQ Editor integration

§ II

Cortex operates HiQ Editor. The dataset lives in your account.

HiQ Editor is HiQ's LCA background database authoring SaaS — the platform where background datasets are created, validated, and submitted for publication to HiQLCD. Cortex Cowork connects to HiQ Editor directly and operates it: creating and editing datasets, adding flows, running single calculations, and submitting the final package for review.

The dataset lives in the user's HiQ Editor account throughout. Cortex does not hold a copy. At every stage — metadata entry, flow editing, DQI scoring — the changes are written to the dataset record in HiQ Editor, not to a local file that has to be uploaded at the end.

Cross-session continuity works the same way as in the product carbon footprint workflow: the dataset ID, finalized metadata, flow entry state, and any pitfalls encountered are recorded at the end of each session. The next session reads them first and continues from where the last one stopped.
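Cross-session state along these lines could be a small JSON record written at session end and read first on the next session. The file location and key names here are assumptions for illustration, not the actual Cortex format.

```python
import json
from pathlib import Path

STATE_FILE = Path("project/.cortex_session_state.json")  # assumed location

def save_session_state(dataset_id, metadata_final, flows_entered, pitfalls):
    """Write the end-of-session record into the project folder."""
    STATE_FILE.parent.mkdir(parents=True, exist_ok=True)
    STATE_FILE.write_text(json.dumps({
        "dataset_id": dataset_id,          # the dataset in HiQ Editor
        "metadata_final": metadata_final,  # True once metadata is confirmed
        "flows_entered": flows_entered,    # e.g. names of flows already added
        "pitfalls": pitfalls,              # free-text notes for the next session
    }, indent=2))

def load_session_state():
    """Read the last session's state, or None on a fresh project."""
    if not STATE_FILE.exists():
        return None
    return json.loads(STATE_FILE.read_text())
```

Because the record travels with the project folder, the next session can resume without re-deriving any of the earlier decisions.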

Mandatory checkpoints

§ III

Four situations where Cortex stops before continuing.

A background dataset that fails a reviewer's checks — or worse, one that passes but carries a silent data quality problem — is expensive to correct after publication. Cortex is configured to surface these problems during authoring, not after submission.

  1. § 01

    Mass balance failure

    Input mass does not equal output mass plus losses. Cortex pauses and lists every flow so the user can identify the gap — missing water vapor, missing co-products, or a unit conversion error are the common causes.

  2. § 02

    Magnitude anomaly

    Single-calculation result differs from comparable datasets by an order of magnitude. Cortex stops, names what is off and the most likely cause, and asks the user whether the data is wrong or the model is wrong before continuing.

  3. § 03

    DQI scoring requires evidence

    Every Pedigree Matrix dimension must be scored from the source record, not estimated. If the justification cannot be grounded in what the user provided, Cortex asks for the missing source detail.

  4. § 04

    Pre-submission checklist

    Metadata complete, flows entered, single calculation passed, documentation attached, sources cited, DQI scored — all six must pass. Any gap is named before submission is triggered.
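Checkpoint 02's order-of-magnitude comparison can be sketched as below. The one-order threshold follows the text above; the use of a median reference and the function name are illustrative assumptions.

```python
import math

def magnitude_anomaly(result, comparable_results, max_orders=1.0):
    """True if `result` differs from the median of comparable published
    results by more than `max_orders` orders of magnitude.

    Assumes all values are positive (log10 is undefined otherwise).
    """
    ref = sorted(comparable_results)[len(comparable_results) // 2]
    return abs(math.log10(result) - math.log10(ref)) > max_orders
```

A True result stops the workflow; the practitioner then decides whether the data or the model is wrong before continuing.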

DQI scored from evidence. Mass balance verified. Submitted once both pass.

The submission package

§ IV

A reviewer can follow the authoring decisions without contacting the author.

The submission package contains: complete metadata (name, region, reference product and flow, system boundary, time range, technical description); all input and output flows with their source basis; DQI scores for all five Pedigree Matrix dimensions with scoring justifications that cite the original source information; a single-calculation result confirming the dataset's output at the reference flow; and full source documentation (DOIs, measurement reports, industry standards).
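Assembled as a record, the package might look like the sketch below. The keys mirror the list above; this is an illustration, not HiQ's export format.

```python
def build_submission_package(metadata, flows, dqi_scores, calc_result, sources):
    """Assemble the review package from the authored dataset parts."""
    return {
        "metadata": metadata,              # name, region, reference product/flow,
                                           # system boundary, time range, description
        "flows": flows,                    # inputs and outputs with source basis
        "dqi": dqi_scores,                 # five Pedigree dimensions + justifications
        "single_calculation": calc_result, # output at the reference flow
        "sources": sources,                # DOIs, measurement reports, standards
    }
```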

On submission, Cortex generates a covering note: a one-paragraph dataset summary, the one or two most uncertain points in the data, and what the reviewer should focus on. The reviewer does not have to reconstruct the authoring reasoning from the dataset record alone.

The project log captures the full authoring history: decisions made across multiple sessions, pitfalls encountered and how they were resolved, alternative data sources considered and rejected. A dataset that goes through multiple review rounds has a record of what changed between rounds and why.

Primary data. Defensible scores. A record the reviewer can follow.

Cortex Cowork runs locally. HiQ Editor holds the dataset. Every session's decisions accumulate in a project log that travels with the project folder.