luminarycloud.pipelines

Attributes

ArgNamedVariableSet

Deprecated alias for PP_NAMED_VARIABLE_SET_ID.

PP_NAMED_VARIABLE_SET_ID

A constant PipelineParameter that can be used in a PipelineArgs params list to add a Named Variable Set ID to each run.

Stage

Exceptions

StopRun

Raised by RunScript code to indicate that the pipeline run should stop intentionally.

Classes

BoolPipelineParameter

A Bool Pipeline Parameter can replace a hard-coded bool in Pipeline operator arguments to allow its value to be set when the Pipeline is invoked.

CreateGeometry

Creates a new geometry by copying a base geometry and optionally mutating it by applying a named variable set (NVS).

CreateGeometryOutputs

The outputs of the CreateGeometry stage.

CreateMesh

Generates a Mesh from a Geometry.

CreateMeshOutputs

The outputs of the CreateMesh stage.

CreateSimulation

Creates and runs a Simulation.

CreateSimulationOutputs

The outputs of the CreateSimulation stage.

FloatPipelineParameter

A Float Pipeline Parameter can replace a hard-coded float in Pipeline operator arguments to allow its value to be set when the Pipeline is invoked.

FlowableIOSchema

Typed representation of RunScript input/output schema.

FlowableType

Canonical flowable type identifiers.

IntPipelineParameter

An Int Pipeline Parameter can replace a hard-coded int in Pipeline operator arguments to allow its value to be set when the Pipeline is invoked.

LogLine

A line of log output from a pipeline task.

Mesh

DEPRECATED: Use CreateMesh instead.

MeshOutputs

The outputs of the Mesh stage.

Pipeline

A Pipeline is a reusable, parameterized workflow that automates large batches of work on the Luminary platform.

PipelineArgs

A table of arguments with which one invokes a Pipeline to create a PipelineJob.

PipelineJobRecord

A persisted pipeline job.

PipelineJobRunRecord

A run of a pipeline job.

PipelineJobRunStatus

The status of a pipeline job run.

PipelineJobStatus

The status of a pipeline job.

PipelineOutputGeometry

A representation of a Geometry in a Pipeline.

PipelineOutputMesh

A representation of a Mesh in a Pipeline.

PipelineOutputSimulation

A representation of a Simulation in a Pipeline.

PipelineParameter

Base class for all concrete PipelineParameters.

PipelineRecord

A persisted pipeline.

PipelineTaskRecord

A task within a pipeline job run.

ReadGeometry

Reads a Geometry into the Pipeline and runs a geometry check.

ReadGeometryOutputs

The outputs of the ReadGeometry stage.

ReadGeometryV1

DEPRECATED: Use CreateGeometry or ReadGeometry instead.

ReadMesh

Reads a Mesh into the Pipeline.

ReadMeshOutputs

The outputs of the ReadMesh stage.

RunScript

RunScript is a stage that runs a user-provided Python function.

RunScriptContext

Context object passed to the user's function at runtime.

Simulate

DEPRECATED: Use CreateSimulation instead.

SimulateOutputs

The outputs of the Simulate stage.

StageDefinition

A stage of a pipeline.

StringPipelineParameter

A String Pipeline Parameter can replace a hard-coded string in Pipeline operator arguments to allow its value to be set when the Pipeline is invoked.

TaskStatus

The status of a pipeline task.

WaitForMesh

Waits for a Mesh to be ready for simulation.

WaitForMeshOutputs

The outputs of the WaitForMesh stage.

Functions

create_pipeline(→ PipelineRecord)

Create a new pipeline.

create_pipeline_job(→ PipelineJobRecord)

Create a new pipeline job.

get_pipeline(→ PipelineRecord)

Get a pipeline by ID.

get_pipeline_job(→ PipelineJobRecord)

Get a pipeline job by ID.

get_pipeline_job_run(→ PipelineJobRunRecord)

Get a pipeline job run by pipeline job ID and index.

iterate_pipeline_jobs(→ Iterator[PipelineJobRecord])

Iterate over all pipeline jobs.

iterate_pipelines(→ Iterator[PipelineRecord])

Iterate over all pipelines.

stage(→ Callable[[Callable[Ellipsis, dict[str, ...)

Decorator for building a RunScript stage from a Python function.

Package Contents

exception StopRun

Raised by RunScript code to indicate that the pipeline run should stop intentionally.

class BoolPipelineParameter(name: str)

A Bool Pipeline Parameter can replace a hard-coded bool in Pipeline operator arguments to allow its value to be set when the Pipeline is invoked.

name
property type: str
class CreateGeometry(*, stage_name: str | None = None, base_geometry_id: str | luminarycloud.pipelines.parameters.StringPipelineParameter, geo_name: str | luminarycloud.pipelines.parameters.StringPipelineParameter | None = None)

Creates a new geometry by copying a base geometry and optionally mutating it by applying a named variable set (NVS). Also runs a geometry check and fails if the check produces errors.

Parameters:
base_geometry_id : str | StringPipelineParameter

The ID of the geometry to copy as a basis of the newly created one.

geo_name : str | StringPipelineParameter | None

The name to assign to the new geometry. If None, a default name will be used.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class CreateGeometryOutputs

The outputs of the CreateGeometry stage.

downstream_inputs() list[luminarycloud.pipelines.flowables.PipelineInput]
geometry: luminarycloud.pipelines.flowables.PipelineOutputGeometry

The resultant Geometry.

class CreateMesh(*, stage_name: str | None = None, geometry: luminarycloud.pipelines.flowables.PipelineOutputGeometry, mesh_name: str | luminarycloud.pipelines.parameters.StringPipelineParameter | None = None, target_cv_count: int | luminarycloud.pipelines.parameters.IntPipelineParameter | None = None, mesh_gen_params: luminarycloud.meshing.MeshGenerationParams | None = None)

Generates a Mesh from a Geometry.

This is the basic stage for generating a minimal Mesh, a Mesh with a target number of control volumes, or a Mesh from arbitrary MeshGenerationParams.

Parameters:
geometry : PipelineOutputGeometry

The Geometry to mesh.

mesh_name : str | StringPipelineParameter | None

The name to assign to the Mesh. If None, a default name will be used.

target_cv_count : int | IntPipelineParameter | None

The target number of control volumes to generate. If None, a minimal mesh will be generated. Default: None

mesh_gen_params : MeshGenerationParams | None

Mesh generation parameters. If provided, target_cv_count must be None.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class CreateMeshOutputs

The outputs of the CreateMesh stage.

downstream_inputs() list[luminarycloud.pipelines.flowables.PipelineInput]
mesh: luminarycloud.pipelines.flowables.PipelineOutputMesh

The Mesh generated from the given Geometry.

class CreateSimulation(*, stage_name: str | None = None, mesh: luminarycloud.pipelines.flowables.PipelineOutputMesh, sim_name: str | luminarycloud.pipelines.parameters.StringPipelineParameter | None = None, sim_template_id: str | luminarycloud.pipelines.parameters.StringPipelineParameter, batch_processing: bool | luminarycloud.pipelines.parameters.BoolPipelineParameter = True, gpu_type: luminarycloud.enum.GPUType | str | luminarycloud.pipelines.parameters.StringPipelineParameter | None = None, gpu_count: int | luminarycloud.pipelines.parameters.IntPipelineParameter | None = None)

Creates and runs a Simulation.

Parameters:
mesh : PipelineOutputMesh

The Mesh to use for the Simulation.

sim_template_id : str | StringPipelineParameter

The ID of the SimulationTemplate to use for the Simulation.

sim_name : str | StringPipelineParameter | None

The name to assign to the Simulation. If None, a default name will be used.

batch_processing : bool | BoolPipelineParameter

If True, the Simulation will run as a standard job. If False, the Simulation will run as a priority job. Default: True

gpu_type : GPUType | str | StringPipelineParameter | None

GPU type to use for the Simulation (e.g. GPUType.H100, "h100"). If None, the default GPU type will be used.

gpu_count : int | IntPipelineParameter | None

Number of GPUs to use for the Simulation. Only relevant if gpu_type is specified. If set to 0 or omitted and gpu_type is specified, the number of GPUs will be automatically determined.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class CreateSimulationOutputs

The outputs of the CreateSimulation stage.

downstream_inputs() list[luminarycloud.pipelines.flowables.PipelineInput]
simulation: luminarycloud.pipelines.flowables.PipelineOutputSimulation

The Simulation.

class FloatPipelineParameter(name: str)

A Float Pipeline Parameter can replace a hard-coded float in Pipeline operator arguments to allow its value to be set when the Pipeline is invoked.

name
property type: str
class FlowableIOSchema

Typed representation of RunScript input/output schema.
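As a sketch only: the dict shape below is an assumption inferred from the from_dict and to_dict signatures on this page (top-level "inputs" and "outputs" keys mapping flowable names to FlowableType string values), not a documented wire format.

```python
# Assumed schema dict shape; the FlowableType string values documented on
# this page are 'Geometry', 'Mesh', and 'Simulation'.
schema_dict = {
    "inputs": {"geometry": "Geometry"},
    "outputs": {"mesh": "Mesh"},
}
# Hypothetical round-trip through the class:
# schema = FlowableIOSchema.from_dict(schema_dict)
# assert schema.to_dict() == schema_dict
```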

classmethod from_dict(data: Mapping[str, Mapping[str, FlowableType | str]]) FlowableIOSchema
to_dict() dict[str, dict[str, str]]
inputs: dict[str, FlowableType]
outputs: dict[str, FlowableType]
class FlowableType

Canonical flowable type identifiers.

GEOMETRY = 'Geometry'
MESH = 'Mesh'
SIMULATION = 'Simulation'
class IntPipelineParameter(name: str)

An Int Pipeline Parameter can replace a hard-coded int in Pipeline operator arguments to allow its value to be set when the Pipeline is invoked.

name
property type: str
class LogLine

A line of log output from a pipeline task.

classmethod from_json(json: dict) LogLine
level: int

The level of the log line

message: str

The text of the log line

timestamp: datetime.datetime

The timestamp of the log line

class Mesh(*, stage_name: str | None = None, geometry: luminarycloud.pipelines.flowables.PipelineOutputGeometry, mesh_name: str | luminarycloud.pipelines.parameters.StringPipelineParameter | None = None, target_cv_count: int | luminarycloud.pipelines.parameters.IntPipelineParameter | None = None, mesh_gen_params: luminarycloud.meshing.MeshGenerationParams | None = None)

DEPRECATED: Use CreateMesh instead.

Generates a Mesh from a Geometry.

This is the basic stage for generating a minimal Mesh, a Mesh with a target number of control volumes, or a Mesh from arbitrary MeshGenerationParams.

Parameters:
geometry : PipelineOutputGeometry

The Geometry to mesh.

mesh_name : str | StringPipelineParameter | None

The name to assign to the Mesh. If None, a default name will be used.

target_cv_count : int | IntPipelineParameter | None

The target number of control volumes to generate. If None, a minimal mesh will be generated. Default: None

mesh_gen_params : MeshGenerationParams | None

Mesh generation parameters. If provided, target_cv_count must be None.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class MeshOutputs

The outputs of the Mesh stage.

downstream_inputs() list[luminarycloud.pipelines.flowables.PipelineInput]
mesh: luminarycloud.pipelines.flowables.PipelineOutputMesh

The Mesh generated from the given Geometry.

class Pipeline(stages: list[Stage])

A Pipeline is a reusable, parameterized workflow that automates large batches of work on the Luminary platform.

get_stage_id(stage: Stage) str
pipeline_params() set[PipelineParameter]
to_yaml() str
stages
class PipelineArgs(params: list[luminarycloud.pipelines.core.PipelineParameter], args: list[list[PipelineArgValueType]])

A table of arguments with which one invokes a Pipeline to create a PipelineJob.

Each column of the PipelineArgs table is a PipelineParameter, and each row is an ordered list of values to substitute in for those parameters. The resulting PipelineJob will have one run per row. Every PipelineParameter of the Pipeline must be included in the PipelineArgs columns. The PP_NAMED_VARIABLE_SET_ID PipelineParameter may also be included.
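A minimal sketch of building the rows for a full-factorial sweep. The sweep values and the parameter names pp_geo_id and pp_cv_count in the comment are illustrative, not part of the SDK.

```python
import itertools

# Hypothetical sweep values for two pipeline parameters.
geo_ids = ["geometry-a", "geometry-b"]
cv_counts = [100_000, 500_000]

# One row per run; column order must match the params list, e.g.:
#   args = lc.pipelines.PipelineArgs(params=[pp_geo_id, pp_cv_count], args=rows)
rows = [list(combo) for combo in itertools.product(geo_ids, cv_counts)]
# 4 rows, so the resulting PipelineJob would have 4 runs.
```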

column_for(param_name: str) int
has_column_for(param_name: str) bool
print_as_table() None
params
rows
class PipelineJobRecord

A persisted pipeline job.

artifacts() list[dict]

Returns a list of artifacts that were produced by this pipeline job.

Artifacts are things like Geometries, Meshes, and Simulations. Each artifact is a dictionary with an "id" key, which is an identifier for the artifact.

Warning

This feature is experimental and may change or be removed in the future.

Returns:
list[dict]

A list of artifact dictionaries.
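For illustration only: per the description above, each artifact dict has at least an "id" key; the prefixed ID strings here are hypothetical, modeled on the "pipelinejob-123"-style IDs used elsewhere on this page.

```python
# Illustrative artifact list; each entry has at least an "id" key.
artifacts = [
    {"id": "geometry-123"},
    {"id": "mesh-456"},
    {"id": "simulation-789"},
]

# e.g. collect just the mesh artifacts from a job client-side:
mesh_ids = [a["id"] for a in artifacts if a["id"].startswith("mesh-")]
```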

cancel() None

Cancel this running pipeline job.

This will request cancellation of the underlying Prefect flow run. The job should eventually transition to a cancelled terminal state once the backend processes the cancellation.

Raises:
HTTPError

If the pipeline job cannot be cancelled (e.g., not found, not running, or lacks the necessary Prefect flow run ID).

delete() None

Delete this pipeline job.

This will permanently delete the pipeline job and all associated runs and tasks. This operation cannot be undone.

Raises:
HTTPException

If the pipeline job does not exist or if you do not have permission to delete it.

Examples

>>> pipeline_job = pipelines.get_pipeline_job("pipelinejob-123")
>>> pipeline_job.delete()
classmethod from_json(json: dict) PipelineJobRecord
get_concurrency_limits() dict[str, int]

Returns the concurrency limits for this pipeline job.

Returns:
dict[str, int]

A dictionary mapping stage IDs to their concurrency limits.

logs() list[LogLine]

Returns a list of log lines for this pipeline job.

Each log line is a LogLine object, which has a timestamp, level, and message.

Returns:
list[LogLine]

A list of LogLine objects.

pause() None

Pause this running pipeline job.

This will prevent new tasks from being scheduled while allowing in-progress tasks to complete. The job status will be set to PAUSED and all stage concurrency limits will be temporarily set to 0.

Call resume() to continue execution.

Raises:
HTTPError

If the pipeline job cannot be paused (e.g., not found or not in RUNNING state).

pipeline() PipelineRecord

Returns the pipeline that this pipeline job was created from.

Returns:
PipelineRecord

The PipelineRecord for the pipeline that this pipeline job was created from.

refresh() PipelineJobRecord

Refresh the pipeline job record from the API. This modifies the current object in place.

Returns:
PipelineJobRecord

The updated PipelineJobRecord object. This is the same object, but with the latest data from the API.

resume() None

Resume this paused pipeline job.

This will restore the job status to RUNNING and restore the original concurrency limits, allowing new tasks to be scheduled again.

Raises:
HTTPError

If the pipeline job cannot be resumed (e.g., not found or not in PAUSED state).

runs(page_size: int = 1000) Iterator[PipelineJobRunRecord]

Returns an iterator of runs for this pipeline job.

Runs are fetched lazily in batches, and the size of the batch is controlled by the page_size parameter.

Parameters:
page_size : int, optional

Number of runs to fetch per page. Defaults to 1000.

Returns:
Iterator[PipelineJobRunRecord]

An iterator of PipelineJobRunRecord objects.

Examples

>>> job = pipelines.get_pipeline_job("job-123")
>>> for run in job.runs():
...     print(run.id)
>>> # Or convert to a list if needed:
>>> all_runs = list(job.runs())
set_concurrency_limits(limits: dict[str, int]) None

Sets the concurrency limits for this pipeline job.

Parameters:
limits : dict[str, int]

A dictionary mapping stage IDs to their concurrency limits.
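A minimal sketch, assuming hypothetical stage IDs (real stage IDs come from the pipeline definition, e.g. via Pipeline.get_stage_id):

```python
# Hypothetical stage IDs mapped to the maximum number of concurrently
# running tasks for each stage.
limits = {"create_mesh": 4, "create_simulation": 2}
# job.set_concurrency_limits(limits)             # apply to a running job
# job.get_concurrency_limits()                   # read the limits back
```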

wait(*, interval_seconds: float = 5, timeout_seconds: float = float('inf'), print_logs: bool = False) Literal['completed', 'failed', 'cancelled']

Wait for the pipeline job to complete, fail, or be cancelled.

This method polls the pipeline job status at regular intervals until it reaches a terminal state (completed, failed, or cancelled).

Parameters:
interval_seconds : float

Number of seconds between status polls. Default is 5 seconds.

timeout_seconds : float

Number of seconds before the operation times out. Default is infinity.

print_logs : bool

If True, prints new log lines as they become available. Default is False.

Returns:
Literal["completed", "failed", "cancelled"]

The final status of the pipeline job.

Raises:
TimeoutError

If the pipeline job does not complete within the specified timeout.

Examples

>>> pipeline_job = pipelines.create_pipeline_job(pipeline.id, args, "My Job")
>>> final_status = pipeline_job.wait(timeout_seconds=3600)
>>> print(f"Pipeline job finished with status: {final_status}")
completed_at: datetime.datetime | None

The time the job finished running.

created_at: datetime.datetime

The time the job was created.

description: str | None

The description of the job.

id: str

The ID of the job.

name: str

The name of the job.

paused_at: datetime.datetime | None

The time the job was paused.

pipeline_id: str

The ID of the pipeline that this job belongs to.

started_at: datetime.datetime | None

The time the job started running.

status: PipelineJobStatus

The status of the job.

updated_at: datetime.datetime

The time the job was last updated.

property url: str
class PipelineJobRunRecord

A run of a pipeline job.

artifacts() list[dict]

Returns a list of artifacts that were produced by this pipeline job run.

Artifacts are things like Geometries, Meshes, and Simulations. Each artifact is a dictionary with an "id" key, which is an identifier for the artifact.

Warning

This feature is experimental and may change or be removed in the future.

Returns:
list[dict]

A list of artifact dictionaries.

classmethod from_json(json: dict) PipelineJobRunRecord
logs() list[LogLine]

Returns a list of log lines for this pipeline job run.

Each log line is a LogLine object, which has a timestamp, level, and message.

Returns:
list[LogLine]

A list of LogLine objects.

pipeline_job() PipelineJobRecord

Returns the pipeline job that this pipeline job run was created from.

Returns:
PipelineJobRecord

The PipelineJobRecord for the pipeline job that this pipeline job run was created from.

refresh() PipelineJobRunRecord

Refresh the pipeline job run record from the API. This modifies the current object in place.

Returns:
PipelineJobRunRecord

The updated PipelineJobRunRecord object. This is the same object, but with the latest data from the API.

arguments: list[luminarycloud.pipelines.arguments.PipelineArgValueType]

The arguments for this run.

idx: int

The index of this run. Corresponds to the row of the PipelineArgs table that was used to create this run.

pipeline_job_id: str

The ID of the pipeline job that this run belongs to.

status: PipelineJobRunStatus

The status of this run.

tasks: list[PipelineTaskRecord]

The tasks for this run.

class PipelineJobRunStatus

The status of a pipeline job run.

Attributes:
PENDING

The run is scheduled but has not yet had any tasks run.

RUNNING

The run has at least one task that is running or completed, but still has incomplete tasks.

COMPLETED

All of the run’s tasks have completed successfully.

FAILED

At least one of the run’s tasks has failed.

CANCELLED

The run was cancelled because its job was cancelled while the run was in progress.

CANCELLED = 'cancelled'
COMPLETED = 'completed'
FAILED = 'failed'
PENDING = 'pending'
RUNNING = 'running'
class PipelineJobStatus

The status of a pipeline job.

Attributes:
PENDING

The job has been created but not yet scheduled for execution.

SCHEDULED

The job has been scheduled for execution but is not yet running.

RUNNING

The job is currently running.

PAUSING

A pause was requested; the job is waiting for in-progress tasks to complete and is not scheduling new ones.

PAUSED

The job is paused, no new tasks are scheduled.

CANCELLING

A cancel was requested; the job is killing any in-progress tasks and is not scheduling new ones.

CANCELLED

The job has been cancelled.

COMPLETED

The job has completed.

FAILED

The job has failed due to an internal service error.

CANCELLED = 'cancelled'
CANCELLING = 'cancelling'
COMPLETED = 'completed'
FAILED = 'failed'
PAUSED = 'paused'
PAUSING = 'pausing'
PENDING = 'pending'
RUNNING = 'running'
SCHEDULED = 'scheduled'
class PipelineOutputGeometry(owner: luminarycloud.pipelines.core.Stage, name: str)

A representation of a Geometry in a Pipeline.

downstream_inputs: list[PipelineInput] = []
name
owner
class PipelineOutputMesh(owner: luminarycloud.pipelines.core.Stage, name: str)

A representation of a Mesh in a Pipeline.

downstream_inputs: list[PipelineInput] = []
name
owner
class PipelineOutputSimulation(owner: luminarycloud.pipelines.core.Stage, name: str)

A representation of a Simulation in a Pipeline.

downstream_inputs: list[PipelineInput] = []
name
owner
class PipelineParameter(name: str)

Base class for all concrete PipelineParameters.

name
property type: str
class PipelineRecord

A persisted pipeline.

delete() None

Delete this pipeline.

This will permanently delete the pipeline and all associated pipeline jobs. This operation cannot be undone.

Raises:
HTTPException

If the pipeline does not exist or if you do not have permission to delete it.

Examples

>>> pipeline = pipelines.get_pipeline("pipeline-123")
>>> pipeline.delete()
classmethod from_json(json: dict) PipelineRecord
invoke(args: luminarycloud.pipelines.PipelineArgs, job_name: str, description: str | None = None, concurrency_limits: dict[str, int] | None = None) PipelineJobRecord

Invoke this pipeline with the given arguments.

Parameters:
args : PipelineArgs

Arguments to pass to the pipeline.

job_name : str

Name of the pipeline job.

description : str, optional

Description of the pipeline job.

concurrency_limits : dict[str, int], optional

A dictionary mapping stage IDs to their concurrency limits.

Returns:
PipelineJobRecord

The pipeline job produced from the invocation.

pipeline_jobs(page_size: int = 100) Iterator[PipelineJobRecord]

Returns an iterator of pipeline jobs that were created from this pipeline.

Jobs are fetched lazily in batches, and the size of the batch is controlled by the page_size parameter.

Parameters:
page_size : int, optional

Number of pipeline jobs to fetch per page. Defaults to 100.

Returns:
Iterator[PipelineJobRecord]

An iterator of PipelineJobRecord objects.

Examples

>>> pipeline = pipelines.get_pipeline("pipeline-123")
>>> for job in pipeline.pipeline_jobs():
...     print(job.name)
>>> # Or convert to a list if needed:
>>> all_jobs = list(pipeline.pipeline_jobs())
refresh() PipelineRecord

Refresh the pipeline record from the API. This modifies the current object in place.

Returns:
PipelineRecord

The updated PipelineRecord object. This is the same object, but with the latest data from the API.

created_at: datetime.datetime

The time the pipeline was created.

definition_yaml: str

The YAML definition of the pipeline.

description: str | None

The description of the pipeline.

id: str

The ID of the pipeline.

name: str

The name of the pipeline.

updated_at: datetime.datetime

The time the pipeline was last updated.

property url: str
class PipelineTaskRecord

A task within a pipeline job run.

classmethod from_json(json: dict) PipelineTaskRecord
artifacts: dict[str, dict]

The artifacts produced by the task.

created_at: datetime.datetime

The time the task was created.

error_messages: list[str] | None

Error messages from logs if the task failed.

stage: StageDefinition

The stage that the task belongs to.

status: TaskStatus

The status of the task.

updated_at: datetime.datetime

The time the task was last updated.

class ReadGeometry(*, stage_name: str | None = None, geometry_id: str | luminarycloud.pipelines.parameters.StringPipelineParameter)

Reads a Geometry into the Pipeline and runs a geometry check.

Reads the geometry by ID without copying or modifying it.

Parameters:
geometry_id : str | StringPipelineParameter

The ID of the Geometry to retrieve.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class ReadGeometryOutputs

The outputs of the ReadGeometry stage.

downstream_inputs() list[luminarycloud.pipelines.flowables.PipelineInput]
geometry: luminarycloud.pipelines.flowables.PipelineOutputGeometry

The resultant Geometry.

class ReadGeometryV1(*, stage_name: str | None = None, geometry_id: str | luminarycloud.pipelines.parameters.StringPipelineParameter, geo_name: str | luminarycloud.pipelines.parameters.StringPipelineParameter | None = None, use_geo_without_copying: bool | luminarycloud.pipelines.parameters.BoolPipelineParameter = False)

DEPRECATED: Use CreateGeometry or ReadGeometry instead.

Reads a Geometry into the Pipeline, optionally makes a copy of it, and optionally mutates that copy by applying a named variable set (NVS). Also runs a geometry check and fails if the check produces errors.

Parameters:
geometry_id : str | StringPipelineParameter

The ID of the Geometry to retrieve (and copy).

geo_name : str | StringPipelineParameter | None

The name to assign to the Geometry copy. If None, a default name will be used.

use_geo_without_copying : bool | BoolPipelineParameter

By default, this is False, meaning that each Geometry this stage references will be copied and the PipelineJob will actually operate on the copied Geometry. This is done for multiple reasons, one of which is so that a PipelineJob can be based on a single parametric Geometry which each run of the job modifies by applying a NamedVariableSet. That modification mutates the Geometry, so those runs can only happen in parallel without interfering with each other if they each operate on a different copy of the Geometry. (The second reason is discussed in the “Details” section below.)

However, if you’ve already prepared your Geometry in advance and you don’t want the PipelineJob to create copies, you can set this to True. In that case, the referenced Geometry will be used directly without being copied.

IMPORTANT: If you set this to True, you must ensure no two PipelineJobRuns operate on the same Geometry, i.e. no two PipelineArgs rows contain the same Geometry ID.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class ReadMesh(*, stage_name: str | None = None, mesh_id: str | luminarycloud.pipelines.parameters.StringPipelineParameter)

Reads a Mesh into the Pipeline.

Does not complete until the Mesh is ready for simulation. Fails if the meshing job completes with errors.

Parameters:
mesh_id : str | StringPipelineParameter

The ID of the Mesh to retrieve.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class ReadMeshOutputs

The outputs of the ReadMesh stage.

downstream_inputs() list[luminarycloud.pipelines.flowables.PipelineInput]
mesh: luminarycloud.pipelines.flowables.PipelineOutputMesh

The Mesh read from the given mesh_id.

class RunScript(script: Callable[Ellipsis, dict[str, Any] | None] | str, *, stage_name: str | None = None, inputs: dict[str, luminarycloud.pipelines.flowables.PipelineOutput] | None = None, outputs: Mapping[str, type[luminarycloud.pipelines.flowables.PipelineOutput] | str] | None = None, entrypoint: str | None = None, params: dict[str, Any] | None = None)

RunScript is a stage that runs a user-provided Python function.

All inputs, outputs, and params must be declared. Inputs and params get passed to the function at runtime, and output types are validated at runtime.

If the function specifies a parameter named context, it will be passed a RunScriptContext.

RunScript stages have a very limited runtime. They are not suitable for long-running operations.

While you can instantiate a RunScript stage directly, the usual way to construct one is to decorate a function with the @stage decorator.

Examples

If you’re using a pipeline to generate geometry variants, you can codify some validations on those variants to flag problems early.

pp_geo_id = lc.pipelines.StringPipelineParameter("geo_id")
pp_sim_template_id = lc.pipelines.StringPipelineParameter("sim_template_id")
pp_expected_num_volumes = lc.pipelines.IntPipelineParameter("expected_num_volumes")

create_geo = lc.pipelines.CreateGeometry(base_geometry_id=pp_geo_id)

@lc.pipelines.stage(
    inputs={"geometry": create_geo.outputs.geometry},
    outputs={"geometry": lc.pipelines.PipelineOutputGeometry},
    params={"expected_num_volumes": pp_expected_num_volumes},
)
def validate_geo(geometry: lc.Geometry, expected_num_volumes: int):
    _surfaces, volumes = geometry.list_entities()
    if len(volumes) != expected_num_volumes:
        raise lc.pipelines.StopRun(
            f"Geometry {geometry.id} has {len(volumes)} volumes, expected {expected_num_volumes}"
        )

    return {"geometry": geometry}

create_mesh = lc.pipelines.CreateMesh(geometry=validate_geo.outputs.geometry)

create_sim = lc.pipelines.CreateSimulation(
    sim_template_id=pp_sim_template_id,
    mesh=create_mesh.outputs.mesh,
)

pipeline = lc.pipelines.create_pipeline(
    name="Custom geo validations demo",
    stages=[create_geo, validate_geo, create_mesh, create_sim],
)
downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class RunScriptContext

Context object passed to the user’s function at runtime.

idempotency_key: str

A key that is guaranteed to be unique to the current RunScript task. Pipelines provide at-least-once execution, meaning that a task may be executed multiple times under certain failure conditions. This is useful to pass as a request_id for SDK calls that support that parameter for idempotency.

class Simulate(*, stage_name: str | None = None, mesh: luminarycloud.pipelines.flowables.PipelineOutputMesh, sim_name: str | luminarycloud.pipelines.parameters.StringPipelineParameter | None = None, sim_template_id: str | luminarycloud.pipelines.parameters.StringPipelineParameter, batch_processing: bool | luminarycloud.pipelines.parameters.BoolPipelineParameter = True, gpu_type: luminarycloud.enum.GPUType | str | luminarycloud.pipelines.parameters.StringPipelineParameter | None = None, gpu_count: int | luminarycloud.pipelines.parameters.IntPipelineParameter | None = None)

DEPRECATED: Use CreateSimulation instead.

Runs a Simulation.

Parameters:
mesh : PipelineOutputMesh

The Mesh to use for the Simulation.

sim_template_id : str | StringPipelineParameter

The ID of the SimulationTemplate to use for the Simulation.

sim_name : str | StringPipelineParameter | None

The name to assign to the Simulation. If None, a default name will be used.

batch_processing : bool | BoolPipelineParameter

If True, the Simulation will run as a standard job. If False, the Simulation will run as a priority job. Default: True

gpu_type : GPUType | str | StringPipelineParameter | None

GPU type to use for the Simulation (e.g. GPUType.H100, "h100"). If None, the default GPU type will be used.

gpu_count : int | IntPipelineParameter | None

Number of GPUs to use for the Simulation. Only relevant if gpu_type is specified. If set to 0 or omitted and gpu_type is specified, the number of GPUs will be automatically determined.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
class SimulateOutputs

The outputs of the Simulate stage.

downstream_inputs() list[luminarycloud.pipelines.flowables.PipelineInput]
simulation: luminarycloud.pipelines.flowables.PipelineOutputSimulation

The Simulation.

class StageDefinition

A stage of a pipeline.

classmethod from_json(json: dict) StageDefinition
id: str

The stage ID

name: str

The stage name

stage_type: str

The stage type

class StringPipelineParameter(name: str)

A String Pipeline Parameter can replace a hard-coded string in Pipeline operator arguments to allow its value to be set when the Pipeline is invoked.

name
property type: str
class TaskStatus

The status of a pipeline task.

Attributes:
PENDING

The task has been created but not yet scheduled for execution.

RUNNING

The task is currently running.

COMPLETED

The task has completed successfully.

FAILED

The task has failed.

UPSTREAM_FAILED

The task has been skipped because an upstream dependency failed.

CANCELLED

The task was cancelled because its job was cancelled while the task was in progress.

CANCELLED = 'cancelled'
COMPLETED = 'completed'
FAILED = 'failed'
PENDING = 'pending'
RUNNING = 'running'
UPSTREAM_FAILED = 'upstream_failed'
class WaitForMesh(*, stage_name: str | None = None, mesh: luminarycloud.pipelines.flowables.PipelineOutputMesh)

Waits for a Mesh to be ready for simulation.

This is useful if you have a more complicated meshing setup than the Mesh stage can handle.

For example, you can use a RunScript stage to call lc.create_mesh() with arbitrary meshing parameters, then pass the resulting mesh to this stage to wait for it to be ready. Mesh creation can take minutes to hours, and RunScript stages have a short timeout, so you cannot wait for the mesh to complete inside the RunScript stage itself. However, lc.create_mesh() returns immediately and meshing happens asynchronously, so the RunScript stage can start the mesh and this stage can wait for it to be ready.

Parameters:
mesh : PipelineOutputMesh

The Mesh to wait for.

downstream_stages() list[Stage]
inputs_dict() dict[str, tuple[Stage, str]]
is_source() bool
outputs
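The pattern described above can be sketched as follows. This is a hedged sketch: create_geo stands for a hypothetical upstream CreateGeometry stage, the lc.create_mesh() arguments are elided, and it assumes the RunScript stage produced by the @pipelines.stage decorator exposes its outputs like the other stages in this reference.

```python
import luminarycloud as lc
from luminarycloud import pipelines

# RunScript stage: start mesh creation with arbitrary meshing parameters.
# lc.create_mesh() returns immediately and meshing runs asynchronously,
# so this stays well within RunScript's short timeout.
@pipelines.stage(
    inputs={"geometry": create_geo.outputs.geometry},  # hypothetical upstream stage
    outputs={"mesh": pipelines.PipelineOutputMesh},
)
def start_meshing(geometry: lc.Geometry):
    mesh = lc.create_mesh(...)  # your meshing parameters here
    return {"mesh": mesh}

# Downstream: block until the mesh is actually ready for simulation.
wait = pipelines.WaitForMesh(mesh=start_meshing.outputs.mesh)
```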
class WaitForMeshOutputs

The outputs of the WaitForMesh stage.

downstream_inputs() list[luminarycloud.pipelines.flowables.PipelineInput]
mesh: luminarycloud.pipelines.flowables.PipelineOutputMesh

The Mesh that will be waited for.

create_pipeline(name: str, pipeline: luminarycloud.pipelines.Pipeline | str | None = None, stages: list[luminarycloud.pipelines.Stage] | None = None, description: str | None = None) PipelineRecord

Create a new pipeline.

Exactly one of pipeline or stages must be provided.

Parameters:
name : str

Name of the pipeline.

pipeline : Pipeline | str, optional

The pipeline to create. Accepts a Pipeline object or a YAML-formatted pipeline definition.

stages : list[Stage], optional

The stages to create the pipeline from.

description : str, optional

Description of the pipeline.
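For instance, a sketch of the stages form (create_geo, create_mesh, and create_sim stand for stage objects you have already built elsewhere):

```python
from luminarycloud import pipelines

# Register a pipeline from a list of stage objects built elsewhere,
# e.g. CreateGeometry -> CreateMesh -> CreateSimulation.
record = pipelines.create_pipeline(
    name="wing-sweep",
    stages=[create_geo, create_mesh, create_sim],  # hypothetical stage objects
    description="Geometry -> mesh -> simulation for the wing study",
)
# The returned PipelineRecord's ID is what create_pipeline_job() takes.
```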

create_pipeline_job(pipeline_id: str, args: luminarycloud.pipelines.PipelineArgs, name: str, description: str | None = None, concurrency_limits: dict[str, int] | None = None) PipelineJobRecord

Create a new pipeline job.

Parameters:
pipeline_id : str

ID of the pipeline to invoke.

args : PipelineArgs

Arguments to pass to the pipeline.

name : str

Name of the pipeline job.

description : str, optional

Description of the pipeline job.

concurrency_limits : dict[str, int], optional

A dictionary mapping stage IDs to their concurrency limits.
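A sketch of invoking a pipeline. The PipelineArgs construction is elided (see the PipelineArgs class), the pipeline ID placeholder must come from create_pipeline() or get_pipeline(), and "simulate" is a hypothetical stage ID:

```python
from luminarycloud import pipelines

args = pipelines.PipelineArgs(...)  # build per the PipelineArgs documentation

job = pipelines.create_pipeline_job(
    pipeline_id="...",  # ID from create_pipeline() or get_pipeline()
    args=args,
    name="wing-sweep batch 1",
    # Cap the "simulate" stage (hypothetical stage ID) at 4 concurrent tasks.
    concurrency_limits={"simulate": 4},
)
```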

get_pipeline(id: str) PipelineRecord

Get a pipeline by ID.

Parameters:
id : str

ID of the pipeline to fetch.

get_pipeline_job(id: str) PipelineJobRecord

Get a pipeline job by ID.

get_pipeline_job_run(pipeline_job_id: str, idx: int) PipelineJobRunRecord

Get a pipeline job run by pipeline job ID and index.

Parameters:
pipeline_job_id : str

ID of the pipeline job.

idx : int

Index of the pipeline job run.

iterate_pipeline_jobs(page_size: int = 100) Iterator[PipelineJobRecord]

Iterate over all pipeline jobs.

Jobs are fetched lazily in batches, and the size of the batch is controlled by the page_size parameter.

Parameters:
page_size : int, optional

Number of pipeline jobs to fetch per page. Defaults to 100.

Returns:
Iterator[PipelineJobRecord]

An iterator that yields PipelineJobRecord objects one at a time.

Examples

Iterate over all pipeline jobs:

>>> for job in pipelines.iterate_pipeline_jobs():
...     print(job.name)

Convert to a list if needed:

>>> all_jobs = list(pipelines.iterate_pipeline_jobs())
iterate_pipelines(page_size: int = 100) Iterator[PipelineRecord]

Iterate over all pipelines.

Pipelines are fetched lazily in batches, and the size of the batch is controlled by the page_size parameter.

Parameters:
page_size : int, optional

Number of pipelines to fetch per page. Defaults to 100.

Returns:
Iterator[PipelineRecord]

An iterator that yields PipelineRecord objects one at a time.

Examples

Iterate over all pipelines:

>>> for pipeline in pipelines.iterate_pipelines():
...     print(pipeline.name)

Convert to a list if needed:

>>> all_pipelines = list(pipelines.iterate_pipelines())
stage(*, inputs: dict[str, luminarycloud.pipelines.flowables.PipelineOutput] | None = None, outputs: dict[str, type[luminarycloud.pipelines.flowables.PipelineOutput]] | None = None, stage_name: str | None = None, params: dict[str, PipelineParameter | luminarycloud.pipelines.arguments.PipelineArgValueType] | None = None) Callable[[Callable[Ellipsis, dict[str, Any] | None]], RunScript]

Decorator for building a RunScript stage from a Python function.

Examples

>>> @pipelines.stage(
...     inputs={"geometry": create_geo.outputs.geometry},
...     outputs={"geometry": pipelines.PipelineOutputGeometry},
... )
... def ensure_single_volume(geometry: lc.Geometry):
...     _, volumes = geometry.list_entities()
...     if len(volumes) != 1:
...         raise pipelines.StopRun("expected exactly one volume")
...     return {"geometry": geometry}
ArgNamedVariableSet

Deprecated alias for PP_NAMED_VARIABLE_SET_ID.

PP_NAMED_VARIABLE_SET_ID

A constant PipelineParameter that can be used in a PipelineArgs params list to add a Named Variable Set ID column to the args table. There must be zero or one of these in a PipelineArgs params list.

Stage