SporeDB API Reference¶
Core¶
sporedb.SporeDB ¶
SporeDB(data_root: str | Path = './sporedb_data', *, endpoint: str | None = None, api_key: str | None = None)
Primary entry point for SporeDB operations.
Composes storage, ingestion, analytics, export, and query layers behind a single high-level API so scientists never interact with internal store objects directly.
Parameters:

- `data_root` (`str | Path`, default: `'./sporedb_data'`) – Path to the local data directory. Defaults to `"./sporedb_data"`.
- `endpoint` (`str | None`, default: `None`) – Cloud API endpoint URL. If provided, the client operates in cloud mode.
- `api_key` (`str | None`, default: `None`) – API key for cloud authentication. Required when `endpoint` is set.

Raises:

- `ValueError` – If `endpoint` is provided without `api_key`.

Example

```python
with SporeDB("./my_data") as db:
    batch = db.create_batch("CHO-Run-001", strain="CHO-K1")
    result = db.import_csv("telemetry.csv", "CHO-Run-001")
    df = db.get_telemetry(result.batch_id)
```
Source code in src/sporedb/client.py
align ¶
Align multiple batch runs by phase boundary for comparison.
Detects phases for each batch, then aligns them by elapsed time from the exponential phase boundary.
Parameters:

- `batch_ids` (`list[UUID]`) – List of batch UUIDs to align.
- `signal` (`str`, default: `'OD600'`) – Telemetry variable used for phase detection and alignment. Defaults to `"OD600"`.

Returns:

- `DataFrame` – A `pandas.DataFrame` with aligned time-series data indexed by elapsed hours from the exponential phase boundary.
Source code in src/sporedb/client.py
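The alignment idea can be illustrated without SporeDB internals: each run is re-indexed by elapsed hours from its own phase boundary, so the boundary sits at hour 0 for every run. This is a simplified sketch (function and variable names are mine, not the library's):

```python
from datetime import datetime, timedelta, timezone

def align_by_boundary(runs: dict[str, list[tuple[datetime, float]]],
                      boundaries: dict[str, datetime]) -> dict[str, list[tuple[float, float]]]:
    """Re-index each run as (elapsed_hours_from_boundary, value) pairs."""
    aligned = {}
    for name, series in runs.items():
        t0 = boundaries[name]
        aligned[name] = [((ts - t0).total_seconds() / 3600.0, v) for ts, v in series]
    return aligned

t = datetime(2024, 1, 1, tzinfo=timezone.utc)
runs = {
    "A": [(t + timedelta(hours=h), 0.1 * h) for h in range(6)],
    "B": [(t + timedelta(hours=h), 0.2 * h) for h in range(6)],
}
# Suppose phase detection placed the exponential boundary at hour 2 for A and hour 3 for B.
aligned = align_by_boundary(runs, {"A": t + timedelta(hours=2), "B": t + timedelta(hours=3)})
print(aligned["A"][0])  # (-2.0, 0.0): two hours before A's boundary
```

After this shift, the two runs can be compared point-for-point on a common elapsed-hours axis even though their wall-clock schedules differ.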
close ¶
compute_metrics ¶
compute_metrics(batch_id: UUID) -> list[BatchMetrics]
Compute derived bioprocess metrics for a batch.
Runs phase detection first, then calculates kinetic parameters (growth rate, productivity, yields) for each detected phase.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch to analyze.

Returns:

- `list[BatchMetrics]` – A list of `BatchMetrics`, one per detected phase.
Source code in src/sporedb/client.py
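The specific growth rate `mu` reported in `BatchMetrics` is conventionally the slope of ln(OD) against time over an exponential segment. A dependency-free sketch of that fit (not SporeDB's actual implementation):

```python
import math

def specific_growth_rate(times_h: list[float], od: list[float]) -> float:
    """Least-squares slope of ln(OD) vs. time, i.e. mu in h^-1, over an exponential segment."""
    y = [math.log(v) for v in od]
    n = len(times_h)
    tbar = sum(times_h) / n
    ybar = sum(y) / n
    num = sum((t - tbar) * (yi - ybar) for t, yi in zip(times_h, y))
    den = sum((t - tbar) ** 2 for t in times_h)
    return num / den

# Synthetic exponential growth at mu = 0.30 h^-1
times = [0.0, 1.0, 2.0, 3.0, 4.0]
od = [0.1 * math.exp(0.30 * t) for t in times]
print(round(specific_growth_rate(times, od), 3))  # 0.3
```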
create_batch ¶
create_batch(name: str, *, strain: str | None = None, media: str | None = None, scale_liters: float | None = None, operator: str | None = None, tags: list[str] | None = None, inoculation: datetime | None = None) -> Batch
Create a new batch.
Constructs a Batch model internally from keyword arguments and
persists it via the batch store.
Parameters:

- `name` (`str`) – Human-readable batch identifier (e.g. `"CHO-Run-001"`).
- `strain` (`str | None`, default: `None`) – Organism strain name.
- `media` (`str | None`, default: `None`) – Growth media description.
- `scale_liters` (`float | None`, default: `None`) – Bioreactor working volume in liters.
- `operator` (`str | None`, default: `None`) – Name of the operator running the batch.
- `tags` (`list[str] | None`, default: `None`) – Optional list of free-form tags for categorization.
- `inoculation` (`datetime | None`, default: `None`) – Inoculation timestamp (timezone-aware).

Returns:

- `Batch` – The newly created `Batch` with a generated `batch_id`.

Example

```python
batch = db.create_batch(
    "CHO-Run-001", strain="CHO-K1", scale_liters=5.0
)
```
Source code in src/sporedb/client.py
create_golden_profile ¶
create_golden_profile(batch_ids: list[UUID], variables: list[str], signal: str = 'OD600', metadata: dict[str, Any] | None = None) -> GoldenBatchProfile
Create a golden batch reference profile from aligned runs.
Aligns the given batches and computes mean/std trajectories across the specified variables.
Parameters:

- `batch_ids` (`list[UUID]`) – UUIDs of the reference batches to include.
- `variables` (`list[str]`) – Telemetry variable names to include in the profile.
- `signal` (`str`, default: `'OD600'`) – Variable used for phase-based alignment. Defaults to `"OD600"`.
- `metadata` (`dict[str, Any] | None`, default: `None`) – Optional metadata dict stored with the profile.

Returns:

- `GoldenBatchProfile` – A `GoldenBatchProfile` with mean and standard deviation trajectories.

Raises:

- `NotImplementedError` – If called in cloud mode.
Source code in src/sporedb/client.py
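The mean/std trajectories are pointwise statistics across the aligned runs. A minimal sketch of that computation for a single variable, assuming the runs have already been aligned to equal length (names are mine, not the library's):

```python
def golden_profile(trajectories: list[list[float]]) -> tuple[list[float], list[float]]:
    """Pointwise mean and population std across equal-length aligned runs."""
    n = len(trajectories)
    cols = list(zip(*trajectories))  # one tuple per timepoint
    mean = [sum(c) / n for c in cols]
    std = [(sum((x - m) ** 2 for x in c) / n) ** 0.5 for c, m in zip(cols, mean)]
    return mean, std

runs = [
    [1.0, 2.0, 3.0],  # run A, already aligned
    [3.0, 4.0, 5.0],  # run B
]
mean, std = golden_profile(runs)
print(mean)  # [2.0, 3.0, 4.0]
print(std)   # [1.0, 1.0, 1.0]
```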
delete_batch ¶
Delete a batch.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch to delete.

Returns:

- `bool` – `True` if the batch existed and was deleted, `False` otherwise.
Source code in src/sporedb/client.py
detect_phases ¶
detect_phases(batch_id: UUID, signal: str = 'OD600', min_size: int = 10) -> list[PhaseAnnotation]
Run PELT changepoint detection on a batch's telemetry.
Uses the ruptures library with an RBF kernel cost function to
identify changepoints in the specified signal and classify the
resulting segments as growth phases.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch to analyze.
- `signal` (`str`, default: `'OD600'`) – Telemetry variable to analyze. Defaults to `"OD600"`.
- `min_size` (`int`, default: `10`) – Minimum segment length for PELT. Defaults to `10`.

Returns:

- `list[PhaseAnnotation]` – A list of `PhaseAnnotation` objects describing the detected growth phases (lag, exponential, stationary, decline).

Example

```python
phases = db.detect_phases(batch_id)
for p in phases:
    print(f"{p.phase_type.value}: {p.start_ts} - {p.end_ts}")
```
Source code in src/sporedb/client.py
detect_phases_online ¶
detect_phases_online(batch_id: UUID, signal: str = 'OD600', *, hazard_rate: float = 1 / 250, threshold: float = 0.5) -> list[PhaseAnnotation]
Run Bayesian Online Changepoint Detection on a batch.
Uses BOCPD (Adams & MacKay 2007) for real-time / streaming-style phase detection. Results are persisted via PhaseStore.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch to analyze.
- `signal` (`str`, default: `'OD600'`) – Telemetry variable to analyze. Defaults to `"OD600"`.
- `hazard_rate` (`float`, default: `1 / 250`) – Prior probability of a changepoint at each step. Defaults to `1/250`.
- `threshold` (`float`, default: `0.5`) – Posterior probability threshold for declaring a changepoint. Defaults to `0.5`.

Returns:

- `list[PhaseAnnotation]` – A list of `PhaseAnnotation` objects.

Raises:

- `NotImplementedError` – If called in cloud mode.
Source code in src/sporedb/client.py
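To build intuition for `hazard_rate`: BOCPD maintains a posterior over the current "run length" (samples since the last changepoint), and the hazard is the per-step prior probability of a reset, so `1/250` expects a changepoint roughly every 250 samples. The toy below implements the Adams & MacKay recursion for a Gaussian mean-shift model with known variance; it is an illustration only, not SporeDB's implementation (model, detection rule, and all names are my assumptions):

```python
import math

def bocpd_changepoints(data: list[float], hazard: float = 1 / 250,
                       mu0: float = 0.0, kappa0: float = 1.0, var: float = 1.0) -> list[int]:
    """Toy BOCPD: report indices where the MAP run length collapses."""
    runlen_probs = [1.0]            # P(run length = r) at the current step
    mus, kappas = [mu0], [kappa0]   # per-run-length posterior mean parameters
    prev_map, changepoints = 0, []
    for t, x in enumerate(data):
        # Gaussian predictive probability of x under each run-length hypothesis
        preds = []
        for mu, kappa in zip(mus, kappas):
            pv = var * (1.0 + 1.0 / kappa)
            preds.append(math.exp(-((x - mu) ** 2) / (2 * pv)) / math.sqrt(2 * math.pi * pv))
        growth = [p * q * (1 - hazard) for p, q in zip(runlen_probs, preds)]
        cp_mass = hazard * sum(p * q for p, q in zip(runlen_probs, preds))
        runlen_probs = [cp_mass] + growth
        z = sum(runlen_probs)
        runlen_probs = [p / z for p in runlen_probs]
        # Conjugate update of each hypothesis's mean estimate
        mus = [mu0] + [(k * m + x) / (k + 1) for m, k in zip(mus, kappas)]
        kappas = [kappa0] + [k + 1 for k in kappas]
        map_rl = max(range(len(runlen_probs)), key=runlen_probs.__getitem__)
        if prev_map >= 5 and map_rl < prev_map - 5:
            changepoints.append(t)
        prev_map = map_rl
    return changepoints

data = [0.0] * 30 + [5.0] * 30
print(bocpd_changepoints(data))  # a single changepoint near index 30
```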
export ¶
Export batch data in the specified format.
Parameters:

- `batch_id` (`UUID`) – Batch to export.
- `format` (`str`, default: `'csv'`) – `"csv"`, `"parquet"`, or `"arrow"`.
- `output_path` (`str | Path | None`, default: `None`) – If given, write to file and return `None`.

Returns:

- `bytes | None` – Serialized bytes, or `None` when `output_path` is provided.
Source code in src/sporedb/client.py
get_assay ¶
Return assay measurements for a batch as a pandas DataFrame.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch to retrieve assay data for.

Returns:

- `DataFrame` – A `pandas.DataFrame` with assay measurement rows.
Source code in src/sporedb/client.py
get_batch ¶
get_batch(batch_id: UUID) -> Batch | None
Retrieve a batch by its ID, or None if not found.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch to retrieve.

Returns:

- `Batch | None` – The `Batch` if found, otherwise `None`.
Source code in src/sporedb/client.py
get_telemetry ¶
Return telemetry data for a batch as a pandas DataFrame.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch to retrieve telemetry for.

Returns:

- `DataFrame` – A `pandas.DataFrame` with columns `ts`, `variable`, `value`, and `unit`.
Source code in src/sporedb/client.py
get_unified_view ¶
Return combined telemetry + assay data for a batch.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch.

Returns:

- `DataFrame` – A `pandas.DataFrame` with telemetry and assay data merged via an ASOF JOIN on timestamp.

Raises:

- `NotImplementedError` – If called in cloud mode.
Source code in src/sporedb/client.py
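ASOF JOIN semantics pair each assay sample with the nearest telemetry sample at or before it in time. A pure-Python sketch of that matching rule (SporeDB delegates this to DuckDB; the names here are mine):

```python
import bisect

def asof_join(telemetry: list[tuple[float, float]],
              assays: list[tuple[float, float]]) -> list[tuple[float, float, float]]:
    """Match each (ts, value) assay row to the nearest *prior* telemetry sample."""
    telemetry = sorted(telemetry)
    ts = [t for t, _ in telemetry]
    out = []
    for at, av in assays:
        i = bisect.bisect_right(ts, at) - 1  # last telemetry ts <= assay ts
        if i >= 0:
            out.append((at, av, telemetry[i][1]))
    return out

telemetry = [(0.0, 0.10), (1.0, 0.15), (2.0, 0.40)]  # (hours, OD600)
assays = [(1.5, 4.8), (2.5, 3.2)]                    # (hours, glucose g/L)
print(asof_join(telemetry, assays))  # [(1.5, 4.8, 0.15), (2.5, 3.2, 0.4)]
```

Assay rows with no prior telemetry sample are dropped, mirroring the "nearest prior timestamp" linkage described for `TimeSeriesStore.get_unified_view`.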
import_csv ¶
import_csv(file_path: str | Path, batch_name: str, inoculation_ts: datetime | None = None) -> ImportResult
Import a CSV file into SporeDB.
Creates a batch automatically and returns an `ImportResult`.
Parameters:

- `file_path` (`str | Path`) – Path to the CSV file on disk.
- `batch_name` (`str`) – Human-readable name for the new batch.
- `inoculation_ts` (`datetime | None`, default: `None`) – Optional inoculation timestamp (timezone-aware).

Returns:

- `ImportResult` – An `ImportResult` with row count, column mappings, and timing.

Raises:

- `NotImplementedError` – If called in cloud mode.

Example

```python
result = db.import_csv("telemetry.csv", "CHO-Run-001")
print(f"Imported {result.rows_imported} rows")
```
Source code in src/sporedb/client.py
import_excel ¶
import_excel(file_path: str | Path, batch_name: str, inoculation_ts: datetime | None = None) -> ImportResult | list[ImportResult]
Import an Excel file into SporeDB.
Creates a batch automatically and returns an `ImportResult` (or a list when multiple sheets are present).
Parameters:

- `file_path` (`str | Path`) – Path to the Excel file on disk.
- `batch_name` (`str`) – Human-readable name for the new batch.
- `inoculation_ts` (`datetime | None`, default: `None`) – Optional inoculation timestamp (timezone-aware).

Returns:

- `ImportResult | list[ImportResult]` – An `ImportResult` for single-sheet files, or a list of `ImportResult` when the workbook contains multiple sheets.

Raises:

- `NotImplementedError` – If called in cloud mode.
Source code in src/sporedb/client.py
list_batches ¶
list_batches() -> list[Batch]
Return all batches.
Returns:

- `list[Batch]` – A list of all `Batch` records in the store.
Source code in src/sporedb/client.py
predict_pat ¶
Run a PAT soft-sensor and return predictions merged with telemetry.
Retrieves telemetry, extracts the sensor's input variables,
calls sensor.predict(), and returns the original telemetry
DataFrame with predicted rows appended.
Parameters:

- `batch_id` (`UUID`) – UUID of the batch to predict on.
- `sensor` (`SoftSensor`) – A `SoftSensor` model instance.

Returns:

- `DataFrame` – A `pandas.DataFrame` combining original telemetry with predicted values.

Raises:

- `NotImplementedError` – If called in cloud mode.
Source code in src/sporedb/client.py
query ¶
Execute a bioprocess DSL query and return results as a DataFrame.
The query string is parsed via the Lark-based grammar, compiled to parameterized DuckDB SQL, and executed against the storage engine.
Parameters:

- `dsl_query` (`str`) – A SporeDB DSL query string (PromQL-style syntax).

Returns:

- `DataFrame` – A `pandas.DataFrame` with the query results.

Example

```python
df = db.query("SELECT OD600 FROM batch WHERE name = 'CHO-Run-001'")
```
Source code in src/sporedb/client.py
score_batch ¶
score_batch(profile: GoldenBatchProfile, batch_id: UUID) -> BatchScore
Score a batch against a golden batch profile.
Parameters:

- `profile` (`GoldenBatchProfile`) – The `GoldenBatchProfile` to compare against.
- `batch_id` (`UUID`) – UUID of the batch to score.

Returns:

- `BatchScore` – A `BatchScore` with a 0–100 similarity score derived from Dynamic Time Warping distance.

Raises:

- `NotImplementedError` – If called in cloud mode.
Source code in src/sporedb/client.py
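DTW measures trajectory similarity while tolerating local time shifts, which makes it a natural fit for comparing batch runs that progress at slightly different rates. Below is the classic quadratic-time DTW recurrence plus one plausible way to map a distance onto a 0–100 scale; the exact mapping SporeDB uses is not documented here, so `similarity_score` is an assumption for illustration:

```python
def dtw_distance(a: list[float], b: list[float]) -> float:
    """Classic O(n*m) dynamic time warping with absolute-difference cost."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def similarity_score(distance: float, scale: float = 1.0) -> float:
    """One way to map a DTW distance onto 0-100 (100 = identical); illustrative only."""
    return 100.0 / (1.0 + distance / scale)

ref = [0.0, 1.0, 2.0, 3.0]
print(dtw_distance(ref, ref))                     # 0.0
print(similarity_score(dtw_distance(ref, ref)))   # 100.0
```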
Data Models¶
sporedb.Batch ¶
Bases: BaseModel
A fermentation batch record.
Represents a single bioreactor run with its metadata, lifecycle state, canonical timestamps, and tags.
Attributes:

- `batch_id` (`UUID`) – UUIDv7 identifier (auto-generated if not provided).
- `name` (`str`) – Human-readable batch name (e.g. `"CHO-Run-001"`).
- `lifecycle` (`BatchLifecycle`) – Current lifecycle state. Defaults to `PLANNED`.
- `timestamps` (`CanonicalTimestamps`) – Canonical timestamps for key events.
- `metadata` (`BatchMetadata`) – Strain, media, scale, and operator metadata.
- `tags` (`list[str]`) – Free-form tags for categorization and filtering.
- `created_at` (`datetime`) – Creation timestamp (auto-set to current UTC time).
- `updated_at` (`datetime`) – Last-modified timestamp (auto-set to current UTC time).

Example

```python
from sporedb.models.batch import Batch

batch = Batch(name="CHO-Run-001")
print(batch.batch_id)
```
sporedb.BatchMetadata ¶
Bases: BaseModel
Metadata describing the conditions of a fermentation batch.
Attributes:

- `strain` (`str | None`) – Organism strain name (e.g. `"CHO-K1"`, `"E. coli BL21"`).
- `media` (`str | None`) – Growth media description (e.g. `"DMEM + 10% FBS"`).
- `scale_liters` (`float | None`) – Bioreactor working volume in liters.
- `operator` (`str | None`) – Name of the operator running the batch.
- `extra` (`dict[str, str | int | float | bool]`) – Additional key-value metadata. Values must be scalar types.
sporedb.BatchLifecycle ¶
Bases: StrEnum
Lifecycle states for a fermentation batch.
A batch progresses through these states from planning to completion:
`PLANNED` -> `INOCULATED` -> `RUNNING` -> `HARVESTED` (or `ABORTED`).
sporedb.CanonicalTimestamps ¶
Bases: BaseModel
Key timestamps in a fermentation batch lifecycle.
Attributes:

- `inoculation` (`datetime | None`) – When the bioreactor was inoculated.
- `feed_start` (`datetime | None`) – When feed addition began (fed-batch).
- `induction` (`datetime | None`) – When gene expression was induced.
- `harvest` (`datetime | None`) – When the batch was harvested.
sporedb.ImportResult ¶
Bases: BaseModel
Result of a data import operation.
Returned by `SporeDB.import_csv` and `SporeDB.import_excel` to report import statistics.
Attributes:

- `batch_id` (`UUID`) – UUID of the batch that was created or updated.
- `rows_imported` (`int`) – Total number of rows successfully imported.
- `columns_mapped` (`dict[str, str]`) – Mapping of source column names to SporeDB variable names.
- `units_converted` (`dict[str, tuple[str, str]]`) – Mapping of variable names to `(source_unit, target_unit)` tuples where unit conversion was applied.
- `warnings` (`list[str]`) – List of warning messages generated during import.
- `elapsed_seconds` (`float`) – Wall-clock time for the import operation.

Example

```python
result = db.import_csv("telemetry.csv", "CHO-Run-001")
print(
    f"Imported {result.rows_imported} rows in {result.elapsed_seconds:.2f}s"
)
```
sporedb.TelemetryRecord ¶
Bases: BaseModel
A single telemetry data point from a bioreactor sensor.
Represents one time-stamped measurement from an online sensor (e.g. dissolved oxygen, pH, temperature, optical density).
Attributes:

- `batch_id` (`UUID`) – UUID of the batch this record belongs to.
- `ts` (`datetime`) – Measurement timestamp (must be timezone-aware).
- `variable` (`str`) – Sensor variable name (e.g. `"OD600"`, `"dissolved_oxygen"`).
- `value` (`float`) – Measured value.
- `unit` (`str | None`) – Unit of measurement (e.g. `"%"`, `"deg_C"`).

Raises:

- `ValueError` – If `ts` is not timezone-aware.
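The timezone-aware requirement boils down to a standard check on `datetime` objects. A minimal sketch of the kind of validation these models perform (the function name is mine; the real check lives inside the Pydantic model):

```python
from datetime import datetime, timezone

def ensure_aware(ts: datetime) -> datetime:
    """Reject naive datetimes, as the timestamp fields in these models do."""
    if ts.tzinfo is None or ts.tzinfo.utcoffset(ts) is None:
        raise ValueError("timestamp must be timezone-aware")
    return ts

ensure_aware(datetime(2024, 1, 1, tzinfo=timezone.utc))  # accepted
try:
    ensure_aware(datetime(2024, 1, 1))  # naive -> rejected
except ValueError as e:
    print(e)  # timestamp must be timezone-aware
```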
Phase Detection¶
sporedb.DetectionConfig ¶
Bases: BaseModel
Configuration for changepoint detection algorithms.
Controls PELT algorithm parameters used by `SporeDB.detect_phases`.
Attributes:

- `signal_variable` (`str`) – Telemetry variable to analyze. Defaults to `"OD600"`.
- `kernel` (`str`) – Cost function kernel for `ruptures`. Defaults to `"rbf"`.
- `min_size` (`int`) – Minimum segment length. Defaults to `10`.
- `penalty` (`float | None`) – Penalty value for PELT. Auto-calibrated if `None`.
- `smoothing_window` (`int`) – Rolling average window applied before detection. Defaults to `5`.

Example

```python
from sporedb.analytics.models import DetectionConfig

config = DetectionConfig(signal_variable="pH", min_size=20)
```
sporedb.PhaseAnnotation ¶
Bases: BaseModel
A detected or manually annotated phase boundary in a batch run.
Attributes:

- `annotation_id` (`UUID`) – UUIDv7 identifier (auto-generated).
- `batch_id` (`UUID`) – UUID of the batch this annotation belongs to.
- `phase_type` (`PhaseType`) – The `PhaseType` of this segment.
- `start_ts` (`datetime`) – Start timestamp of the phase (must be timezone-aware).
- `end_ts` (`datetime`) – End timestamp of the phase (must be timezone-aware).
- `signal_variable` (`str`) – The telemetry variable that was analyzed.
- `confidence` (`float`) – Detection confidence score (0.0 to 1.0). Defaults to `0.0`.
- `metadata` (`dict[str, object]`) – Additional metadata (e.g. algorithm parameters).

Raises:

- `ValueError` – If `start_ts` or `end_ts` is not timezone-aware.
sporedb.PhaseType ¶
Bases: StrEnum
Growth phases in a bioprocess batch.
Each phase corresponds to a distinct segment of the growth curve:

- `LAG`: Initial adaptation period after inoculation.
- `EXPONENTIAL`: Rapid cell growth at maximum specific growth rate.
- `STATIONARY`: Growth rate equals death rate; nutrient limitation.
- `DECLINE`: Cell viability decreasing; nutrient depletion.
- `UNKNOWN`: Phase could not be classified.
sporedb.BatchMetrics ¶
Bases: BaseModel
Computed kinetic metrics for a specific phase of a batch run.
Attributes:

- `batch_id` (`UUID`) – UUID of the batch.
- `phase_type` (`PhaseType`) – The growth phase these metrics apply to.
- `mu` (`float | None`) – Specific growth rate in h^-1.
- `qp` (`float | None`) – Volumetric productivity in g/L/h.
- `yx_s` (`float | None`) – Biomass yield coefficient (g biomass / g substrate).
- `yp_s` (`float | None`) – Product yield coefficient (g product / g substrate).
- `r_squared` (`float | None`) – Regression fit quality (0.0 to 1.0).
- `signal_variable` (`str`) – Telemetry variable used for computation. Defaults to `"OD600"`.
sporedb.GoldenBatchProfile ¶
Bases: BaseModel
Reference trajectory from top-N aligned batches for golden batch scoring.
Stores the mean and standard deviation of aligned time-series trajectories for a set of reference (golden) batches.
Attributes:

- `profile_id` (`UUID`) – UUIDv7 identifier (auto-generated).
- `variables` (`list[str]`) – List of telemetry variable names in the profile.
- `mean_trajectory` (`list[list[float]]`) – Mean trajectory matrix (n_timepoints x n_variables).
- `std_trajectory` (`list[list[float]]`) – Standard deviation matrix (same shape as mean).
- `elapsed_hours` (`list[float]`) – Elapsed time values for each row (n_timepoints,).
- `source_batch_ids` (`list[str]`) – String UUIDs of the batches used to build this profile.
- `metadata` (`dict[str, object]`) – Optional metadata (e.g. creation date, notes).
Assay & Measurements¶
sporedb.AssayMeasurement ¶
Bases: BaseModel
An offline assay measurement for a batch.
Represents a single analytical measurement taken outside the bioreactor (e.g. HPLC, cell count, LC-MS).
Attributes:

- `batch_id` (`UUID`) – UUID of the batch this measurement belongs to.
- `ts` (`datetime`) – Sampling timestamp (must be timezone-aware).
- `variable` (`str`) – Measured quantity name (e.g. `"glucose"`, `"viable_cells"`).
- `value` (`float`) – Measured value.
- `uncertainty` (`float`) – Measurement uncertainty (1 sigma). Defaults to `0.0`.
- `unit` (`str | None`) – Unit of measurement (e.g. `"g/L"`).
- `method` (`str | None`) – Analytical method used (e.g. `"HPLC"`, `"cell_count"`).

Raises:

- `ValueError` – If `ts` is not timezone-aware.
sporedb.UncertainValue ¶
Bases: BaseModel
A measurement with associated uncertainty (1 sigma).
Attributes:

- `value` (`float`) – The measured value.
- `uncertainty` (`float`) – One standard deviation uncertainty. Defaults to `0.0`.
- `unit` (`str`) – Unit of measurement (e.g. `"g/L"`, `"cells/mL"`).
to_ufloat ¶
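Presumably `to_ufloat` bridges to the `uncertainties` package's `ufloat` type (an assumption; the docstring above gives no detail). The 1-sigma semantics imply that sums of independent measurements add their errors in quadrature, which a dependency-free sketch can show:

```python
import math

def add_uncertain(v1: float, u1: float, v2: float, u2: float) -> tuple[float, float]:
    """Sum of two independent measurements: values add, 1-sigma errors add in quadrature."""
    return v1 + v2, math.sqrt(u1 ** 2 + u2 ** 2)

# e.g. two glucose assays at 10 +/- 3 g/L and 20 +/- 4 g/L
value, sigma = add_uncertain(10.0, 3.0, 20.0, 4.0)
print(value, sigma)  # 30.0 5.0
```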
sporedb.UnitOperation ¶
Bases: BaseModel
A single processing step in a batch's lineage (DAG node).
Each unit operation represents one step in the bioprocess workflow
(e.g. seed train, fermentation, centrifugation). Operations form a
directed acyclic graph (DAG) via parent_ids.
Attributes:

- `operation_id` (`UUID`) – UUIDv7 identifier (auto-generated).
- `batch_id` (`UUID`) – UUID of the batch this operation belongs to.
- `name` (`str`) – Operation name (e.g. `"seed_train"`, `"centrifugation"`).
- `operation_type` (`str`) – Category (e.g. `"upstream"`, `"downstream"`, `"analytical"`).
- `parent_ids` (`list[UUID]`) – UUIDs of parent operations in the DAG.
- `started_at` (`datetime | None`) – When this operation started (timezone-aware).
- `ended_at` (`datetime | None`) – When this operation completed (timezone-aware).
- `parameters` (`dict[str, str | int | float | bool]`) – Process parameters as key-value pairs.

Raises:

- `ValueError` – If `started_at` or `ended_at` is not timezone-aware.
Storage¶
sporedb.StorageEngine ¶
Manages DuckDB connection and data root directory.
Uses an in-memory DuckDB instance that reads/writes Parquet files directly. The data_root directory is created if it does not exist.
Parameters:

- `data_root` (`Path | str`) – Path to the directory where Parquet files are stored. Created automatically if it does not exist.
Source code in src/sporedb/storage/engine.py
sporedb.BatchStore ¶
BatchStore(engine: StorageEngine)
Batch CRUD operations backed by a Parquet catalog file.
Uses PyArrow for direct catalog reads/writes (small file), and DuckDB for search queries with predicate pushdown.
Parameters:

- `engine` (`StorageEngine`) – A `StorageEngine` instance providing the DuckDB connection and data root path.
Source code in src/sporedb/storage/batch_store.py
create_batch ¶
Persist a new batch to the catalog.
Raises `ValueError` if `batch_id` already exists.
Source code in src/sporedb/storage/batch_store.py
delete_batch ¶
Remove a batch from the catalog. Returns True if found and deleted.
Source code in src/sporedb/storage/batch_store.py
get_batch ¶
get_batch(batch_id: UUID) -> Batch | None
Retrieve a batch by ID. Returns None if not found.
Source code in src/sporedb/storage/batch_store.py
list_batches ¶
list_batches() -> list[Batch]
Return all batches in the catalog. Empty list if no catalog exists.
Source code in src/sporedb/storage/batch_store.py
search_batches ¶
search_batches(filter: BatchFilter | None = None) -> list[Batch]
Search batches using compound filter conditions via DuckDB.
All filter values are passed as parameterized query parameters to prevent SQL injection.
Source code in src/sporedb/storage/batch_store.py
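The injection-safety claim rests on keeping user values out of the SQL text: filters become `?` placeholders and the values travel separately as parameters. A simplified sketch of that pattern (table and column names here are hypothetical, not SporeDB's actual schema):

```python
def build_search_query(filters: dict[str, object]) -> tuple[str, list[object]]:
    """Compose a parameterized WHERE clause; values are never interpolated into SQL."""
    clauses, params = [], []
    for column, value in filters.items():
        clauses.append(f"{column} = ?")  # placeholder, not the value itself
        params.append(value)
    where = " AND ".join(clauses) if clauses else "TRUE"
    return f"SELECT * FROM batches WHERE {where}", params

sql, params = build_search_query({"strain": "CHO-K1", "operator": "alice"})
print(sql)     # SELECT * FROM batches WHERE strain = ? AND operator = ?
print(params)  # ['CHO-K1', 'alice']
```

The database driver binds `params` at execution time, so a value like `"x' OR '1'='1"` is matched literally rather than parsed as SQL.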
update_batch ¶
Update a batch in the catalog. Sets updated_at to now(UTC).
Reads all rows, replaces the matching batch_id, and rewrites.
Source code in src/sporedb/storage/batch_store.py
sporedb.TimeSeriesStore ¶
TimeSeriesStore(engine: StorageEngine)
Storage for telemetry and assay time-series data.
Uses Parquet files organized by batch_id (Hive partitioning) and DuckDB ASOF JOIN for unified temporal views.
Parameters:

- `engine` (`StorageEngine`) – A `StorageEngine` instance providing the DuckDB connection and data root path.
Source code in src/sporedb/storage/ts_store.py
append_assay ¶
append_assay(records: list[AssayMeasurement]) -> int
Append assay measurements to batch Parquet file. Returns count appended.
All records must share the same batch_id.
Source code in src/sporedb/storage/ts_store.py
append_telemetry ¶
append_telemetry(records: list[TelemetryRecord]) -> int
Append telemetry records to batch Parquet file. Returns count appended.
All records must share the same batch_id.
Source code in src/sporedb/storage/ts_store.py
get_assay ¶
Get all assay data for a batch. Returns empty DataFrame if none.
Source code in src/sporedb/storage/ts_store.py
get_assay_as_uncertain ¶
get_assay_as_uncertain(batch_id: UUID, variable: str) -> list[UncertainValue]
Get assay measurements as UncertainValue objects.
Used for uncertainty propagation.
Source code in src/sporedb/storage/ts_store.py
get_telemetry ¶
Get all telemetry for a batch. Returns empty DataFrame if none.
Source code in src/sporedb/storage/ts_store.py
get_unified_view ¶
ASOF JOIN telemetry and assay for a unified time-series view.
Links each assay measurement to the nearest prior telemetry timestamp. Uses DuckDB ASOF JOIN for efficient temporal alignment.
Source code in src/sporedb/storage/ts_store.py
sporedb.LineageStore ¶
LineageStore(engine: StorageEngine)
Storage and traversal for process lineage DAG.
Persists unit operations as Parquet files per batch, with parent_ids encoding DAG edges. Supports BFS traversal in both upstream and downstream directions.
Parameters:

- `engine` (`StorageEngine`) – A `StorageEngine` instance providing the DuckDB connection and data root path.
Source code in src/sporedb/storage/lineage_store.py
add_operation ¶
add_operation(operation: UnitOperation) -> UnitOperation
Add a unit operation to the lineage DAG. Returns the operation.
Source code in src/sporedb/storage/lineage_store.py
get_downstream ¶
get_downstream(operation_id: UUID, batch_id: UUID) -> list[UnitOperation]
Get all downstream operations from a given operation (BFS traversal).
Source code in src/sporedb/storage/lineage_store.py
get_operations ¶
get_operations(batch_id: UUID) -> list[UnitOperation]
Get all operations for a batch. Returns empty list if none.
Source code in src/sporedb/storage/lineage_store.py
get_upstream ¶
get_upstream(operation_id: UUID, batch_id: UUID) -> list[UnitOperation]
Get all upstream (ancestor) operations from a given operation.
Source code in src/sporedb/storage/lineage_store.py
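Since `parent_ids` encode the DAG edges, upstream traversal is a breadth-first search over parent links. A self-contained sketch of that traversal using string IDs in place of UUIDs (operation names are hypothetical examples):

```python
from collections import deque

def upstream_ids(start: str, parents: dict[str, list[str]]) -> list[str]:
    """BFS over parent edges: all ancestors of `start` in the lineage DAG."""
    seen, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for parent in parents.get(node, []):
            if parent not in seen:
                seen.add(parent)
                order.append(parent)
                queue.append(parent)
    return order

# seed_train -> fermentation -> centrifugation (hypothetical operations)
parents = {
    "centrifugation": ["fermentation"],
    "fermentation": ["seed_train"],
    "seed_train": [],
}
print(upstream_ids("centrifugation", parents))  # ['fermentation', 'seed_train']
```

`get_downstream` is the same search over the reversed edge set; the `seen` set keeps the traversal linear even when operations share ancestors.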
Cloud¶
sporedb.CloudClient ¶
HTTP-based SporeDB client for the cloud tier.
Mirrors the method signatures of `sporedb.client.SporeDB` so that switching between local and cloud mode is transparent to callers.
Parameters:

- `endpoint` – Base URL of the SporeDB cloud instance (e.g. `https://cloud.sporedb.io`).
- `api_key` – JWT bearer token for authentication.
- `timeout` – HTTP request timeout in seconds.
Source code in src/sporedb/cloud_client.py
align ¶
Align multiple batch runs via the cloud API.
Source code in src/sporedb/cloud_client.py
close ¶
compute_metrics ¶
Compute batch metrics via the cloud API.
Returns a list of metric dicts (one per detected phase).
Source code in src/sporedb/cloud_client.py
create_batch ¶
create_batch(name: str, *, strain: str | None = None, media: str | None = None, scale_liters: float | None = None, operator: str | None = None, tags: list[str] | None = None, inoculation: datetime | None = None) -> Batch
Create a new batch via the cloud API.
Source code in src/sporedb/cloud_client.py
delete_batch ¶
Delete a batch. Returns True if it existed.
Source code in src/sporedb/cloud_client.py
detect_phases ¶
Run phase detection via the cloud API.
Source code in src/sporedb/cloud_client.py
detect_phases_online ¶
detect_phases_online(batch_id: UUID, signal: str = 'OD600', *, hazard_rate: float = 0.004, threshold: float = 0.5) -> list[Any]
Run BOCPD online phase detection via the cloud API.
Source code in src/sporedb/cloud_client.py
export ¶
Export batch data via the cloud API.
Parameters:

- `batch_id` (`UUID`) – Batch to export.
- `format` (`str`, default: `'csv'`) – `"csv"` or `"arrow"`.

Returns:

- `bytes` – Raw bytes of the exported data.
Source code in src/sporedb/cloud_client.py
get_assay ¶
Return assay measurements for a batch as a pandas DataFrame.
Source code in src/sporedb/cloud_client.py
get_batch ¶
get_batch(batch_id: UUID) -> Batch | None
Retrieve a batch by ID, or None if not found.
Source code in src/sporedb/cloud_client.py
get_telemetry ¶
Return telemetry data for a batch as a pandas DataFrame.
Source code in src/sporedb/cloud_client.py
query ¶
Execute a bioprocess DSL query via the cloud API.