luminarycloud.vis

Classes

Box

This class defines a box used for filters such as box clip.

BoxClip

Clip the dataset using a box. Cells inside the box are kept while cells completely outside the box are removed.

ColorMap

The color map allows user control over how field values are mapped to colors.

ColorMapAppearance

ColorMapAppearance controls how the color maps appear in the image, including visibility, position, and size.

DataExtractor

Extracts data from solutions.

DataRange

The data range represents a range of values. Ranges are only valid if the max value is greater than or equal to the min value.

DirectionalCamera

Class defining a directional camera for visualization. Directional cameras are oriented around the visible objects in the scene and always face toward the scene.

DisplayAttributes

Display attributes specify how objects such as meshes, geometries, and filters appear in the scene.

EntityType

An enum for specifying the source of an image. When listing extracts, the user must specify what type of extract they are interested in.

ExtractOutput

The extract output represents the request to extract data from a solution, and is constructed by the DataExtractor class.

Field

The field controls the field displayed on the object. If the field doesn't exist, a solid color is shown.

FixedSizeVectorGlyphs

Vector Glyphs is a vector field visualization technique that places fixed-size arrows (glyphs) oriented in the direction of the underlying vector field.

InteractiveScene

The InteractiveScene acts as the bridge between the Scene and the Jupyter widget.

IntersectionCurve

Generate line data by computing intersections between solution surfaces and a slice plane.

Isosurface

Isosurface is used to evaluate scalar fields at constant values, known as isovalues.

LookAtCamera

Class defining a look at camera for visualization. Unlike the directional camera, the look at camera must be fully specified by the user.

Plane

This class defines a plane.

PlaneClip

Clip the dataset using a plane. Cells in the direction of the plane normal are kept, while cells in the opposite direction are removed.

RakeStreamlines

Streamlines is a vector field visualization technique that integrates massless particles through a vector field, forming curves.

RenderOutput

The render output represents the request to render images from a geometry, mesh, or solution, and is constructed by the Scene class.

ScaledVectorGlyphs

Vector Glyphs is a vector field visualization technique that places arrows (glyphs) oriented in the direction of the underlying vector field and scaled by the vector magnitude.

Scene

The scene class is the base for any visualization. The scene is constructed with the entity you want to visualize: a solution, a mesh, or a geometry.

Slice

The slice filter is used to extract a cross-section of a 3D dataset by slicing it with a plane.

SurfaceLIC

A Surface Line Integral Convolution (LIC) filter is used to depict the flow

SurfaceStreamlines

Streamlines is a vector field visualization technique that integrates massless particles through a vector field, forming curves.

Threshold

The threshold filter is used to remove cells based on a data range. Cells with values within the range are kept.

Functions

list_data_extracts(→ List[ExtractOutput])

Lists all previously created data extracts associated with a project and a solution.

list_quantities(→ List[luminarycloud.enum.VisQuantity])

List the quantity types, including derived quantities, that are available in a solution.

list_renders(→ List[RenderOutput])

Lists all previously created renders associated with a project and an entity.

Package Contents

class Box

This class defines a box used for filters such as box clip.

Warning

This feature is experimental and may change or be removed in the future.

angles: luminarycloud.types.Vector3Like

The rotation of the box specified in Euler angles (degrees) and applied in XYZ ordering. Default: [0,0,0]

center: luminarycloud.types.Vector3Like

A point defined at the center of the box. Default: [0,0,0].

lengths: luminarycloud.types.Vector3Like

The lengths of each side of the box. Default: [1,1,1]

class BoxClip(name: str)

Clip the dataset using a box. Cells inside the box are kept while cells completely outside the box are removed.

Warning

This feature is experimental and may change or be removed in the future.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

property box: luminarycloud.vis.primitives.Box
display_attrs
id
inverted: bool = True
name
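
For example, a minimal sketch of a box clip (a hypothetical usage; it assumes these names are importable from luminarycloud.vis, that the Box returned by the box property can be edited in place, and that scene is an existing Scene; the geometry values are illustrative):

    from luminarycloud.vis import BoxClip

    clip = BoxClip(name="wake box clip")
    clip.box.center = [2.0, 0.0, 0.5]    # center of the clip box
    clip.box.lengths = [4.0, 2.0, 1.0]   # side lengths along x, y, z
    clip.box.angles = [0.0, 0.0, 15.0]   # Euler angles in degrees, applied in XYZ order
    # scene.add_filter(clip)             # add to a Scene built from a mesh or solution
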
class ColorMap

The color map allows user control over how field values are mapped to colors. Color maps are assigned to fields (i.e., a quantity and component) and not to individual display attributes. This means that there can only ever be one color map per field/component combination (e.g., velocity-magnitude or velocity-x). Any display attribute in the scene (i.e., filter display attributes or global display attributes) that maps to this color map will be colored in the same manner.

Warning

This feature is experimental and may change or be removed in the future.

appearance: ColorMapAppearance | None = None

This attribute controls how the color map annotation appears in the image, including location, size, and visibility. When the scene is set to automatic color maps, these attributes are automatically populated unless overridden. When setting the appearance, the user is responsible for setting all values.

data_range: DataRange

An optional data range to use for the color map. The range must be set explicitly; if it is not, the field's global data range is used. When comparing multiple results, either across solutions of the same simulation or across different simulations, it is highly recommended to provide a range so the color scales are the same between the resulting images. The default is an invalid data range.

discretize: bool = False

Use discrete color bins instead of a continuous range. When True, ‘n_colors’ indicates how many discrete bins to use. Default: False.

field: Field

The field and component this color map applies to.

n_colors: int = 8

How many discrete bins to use when discretize is True. Valid n_colors values are [1, 256]. Default: 8.

preset: luminarycloud.enum.ColorMapPreset

The color map preset to use. This defines the colors used in the color map. Default is ‘JET’.

class ColorMapAppearance

ColorMapAppearance controls how the color maps appear in the image, including visibility, position and size.

The width, height, and the lower left position of the color map are specified in normalized device coordinates. These are values in the [0,1] range. For example, the lower left hand coordinate of the image is [0,0], and the top right coordinate of the image is [1,1].

Warning

This feature is experimental and may change or be removed in the future.

height: float = 0.146

The height of the color map in normalized device coordinates. Default: 0.146

lower_left_x: float = 0.8

The lower left x position of the color map in normalized device coordinates. Default: 0.8

lower_left_y: float = 0.8

The lower left y position of the color map in normalized device coordinates. Default: 0.8

text_size: int = 36

The text size for the color map legend in pixels. Default: 36

visible: bool = True

Controls if the color map is displayed or not. Default: True

width: float = 0.034

The width of the color map in normalized device coordinates. Default: 0.034
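
As an illustration, a sketch of a color map with an explicit data range and a repositioned legend (it assumes these classes can be constructed with no arguments and configured attribute by attribute; the range values are illustrative, and DataRange is described below):

    from luminarycloud.vis import ColorMap, ColorMapAppearance, DataRange, Field

    cmap = ColorMap()
    cmap.field = Field()                    # quantity/component chosen elsewhere, e.g. from list_quantities()
    cmap.data_range = DataRange()
    cmap.data_range.min_value = 0.0         # explicit range keeps color scales comparable across images
    cmap.data_range.max_value = 250.0
    cmap.discretize = True
    cmap.n_colors = 16                      # must be within [1, 256]

    cmap.appearance = ColorMapAppearance()  # when overriding, the user is responsible for all appearance values
    cmap.appearance.visible = True
    cmap.appearance.lower_left_x = 0.05
    cmap.appearance.lower_left_y = 0.05
    cmap.appearance.width = 0.034
    cmap.appearance.height = 0.146
    cmap.appearance.text_size = 24
    # scene.add_color_map(cmap)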

class DataExtractor(solution: luminarycloud.solution.Solution)

Extracts data from solutions.

Warning

This feature is experimental and may change or be removed in the future.

add_data_extract(extract: DataExtract) None

Add a data extract.

create_extracts(name: str, description: str) ExtractOutput

Create a request to extract data from a solution.

Parameters:
name: str

A short name for the extracts.

description: str

A longer description of the extracts.

surface_ids() List[str]

Get a list of all the surface ids associated with the solution.

tag_ids() List[str]

Get a list of all the tag ids associated with the solution.

far_field_boundary_ids: List[str] = []
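
For instance, a sketch of constructing an extractor and listing the surface and tag ids it can target (solution stands in for an existing luminarycloud Solution obtained elsewhere):

    from luminarycloud.vis import DataExtractor

    solution = ...                    # a luminarycloud.solution.Solution obtained elsewhere
    extractor = DataExtractor(solution)
    print(extractor.surface_ids())    # surface ids available on this solution
    print(extractor.tag_ids())        # tag ids available on this solution
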
class DataRange

The data range represents a range of values. Ranges are only valid if the max_value is greater than or equal to the min_value. The default is invalid.

Warning

This feature is experimental and may change or be removed in the future.

is_valid() bool
max_value: float

The maximum value of the range.

min_value: float

The minimum value of the range.

class DirectionalCamera

Class defining a directional camera for visualization. Directional cameras are oriented around the visible objects in the scene and will always face toward the scene.

Warning

This feature is experimental and may change or be removed in the future.

direction: luminarycloud.enum.CameraDirection

The orientation of the camera. Default: X_POSITIVE

height: int = 1024

The height of the output image in pixels. Default: 1024

label: str = ''

A user defined label to help distinguish between multiple images.

name: str = 'default directional camera'

A user defined name for the camera.

projection: luminarycloud.enum.CameraProjection

The type of projection used for the camera. Default: ORTHOGRAPHIC

width: int = 1024

The width of the output image in pixels. Default: 1024

zoom_in: float = 1.0

Zooms in from the default camera distance. Valid values are in the (0,1] range. A value of 0.5 means move the camera halfway between the default point and the object (i.e., a 2x zoom). A value of 1 means no zoom. Default: 1.0
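
For example, a sketch of a 1080p side-view camera (the enum members shown are the documented defaults; the zoom and label values are illustrative, and scene is an existing Scene):

    from luminarycloud.enum import CameraDirection, CameraProjection
    from luminarycloud.vis import DirectionalCamera

    cam = DirectionalCamera()
    cam.name = "side view"
    cam.label = "side"                              # used to label the rendered image
    cam.direction = CameraDirection.X_POSITIVE
    cam.projection = CameraProjection.ORTHOGRAPHIC
    cam.width = 1920
    cam.height = 1080
    cam.zoom_in = 0.5                               # halfway toward the object, i.e. a 2x zoom
    # scene.add_camera(cam)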

class DisplayAttributes

Display attributes specify how objects such as meshes, geometries, and filters appear in the scene.

Warning

This feature is experimental and may change or be removed in the future.

field: Field

What field quantity/component to color by, if applicable.

opacity: float = 1.0

How opaque the object is. This is a normalized number between 0 (i.e., fully transparent) and 1 (i.e., fully opaque). Default: 1

representation: luminarycloud.enum.Representation

How the object is represented in the scene (e.g., surface, surface with edges, wireframe, or points). Default: surface.

visible: bool = True

If the object is visible or not. Default: True

class EntityType

An enum for specifying the source of an image. When listing extracts, the user must specify what type of extract they are interested in. This enum is only used by the visualization code.

Warning

This feature is experimental and may change or be removed in the future.

Attributes:
SIMULATION

Specifies a simulation entity (i.e., a result).

MESH

Specifies a mesh entity.

GEOMETRY

Specifies a geometry entity.

GEOMETRY = 2
MESH = 1
SIMULATION = 0
class ExtractOutput(factory_token: luminarycloud.vis.vis_util._InternalToken)

The extract output represents the request to extract data from a solution, and is constructed by the DataExtractor class. The operation executes asynchronously, so the caller must check the status of the data extract. If the status is completed, then the resulting data is available for download.

Warning

This class should not be directly instantiated by users.

Warning

This feature is experimental and may change or be removed in the future.

delete() None

Delete the extracts.

download_data() List[Tuple[List[List[str | int | float]], str]]

Downloads the resulting data into memory. This is useful for plotting data in notebooks. If the status is not complete, an error will be raised.

Returns:

A list of results for each extract added to the request. Each result is a tuple where the first entry is an in-memory CSV file (List[List[Union[str, int, float]]]); the first row is the header, followed by the data rows. The second entry of the tuple is the label provided by the user for the DataExtract.

Warning

This feature is experimental and may change or be removed in the future.

refresh() ExtractOutput

Refresh the status of the ExtractOutput.

Returns:
self
save_files(file_prefix: str, write_labels: bool = False) None

A helper for downloading and saving the resulting CSV files to the file system. If the status is not complete, an error will be raised. The CSV files will be of the form {file_prefix}_{index}.csv. Optionally, a file can be written containing a list of file names and labels. Labels are an optional field in the DataExtracts.

Warning

This feature is experimental and may change or be removed in the future.

Parameters:
file_prefix: str, required

The file prefix to save the extract. A file index and ‘.csv’ will be appended to the file names.

write_labels: bool, optional

If True, write a JSON file containing a list of CSV file names and labels. The resulting JSON file is named '{file_prefix}.json'. Default: False

wait(interval_seconds: float = 4, timeout_seconds: float = float('inf')) luminarycloud.enum.ExtractStatusType

Wait until the ExtractOutput is completed or failed.

Parameters:
interval_seconds: float, optional

Number of seconds between polls.

timeout_seconds: float, optional

Number of seconds before timeout.

Returns:
ExtractStatusType: Current status of the data extract.
description: str = ''
name: str = ''
status: luminarycloud.enum.ExtractStatusType
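
As an illustration, a sketch of the extract workflow from creation to download (it assumes solution is an existing Solution and that extracts have already been added to the extractor; names and descriptions are illustrative):

    from luminarycloud.vis import DataExtractor

    extractor = DataExtractor(solution)
    # ... add one or more extracts with extractor.add_data_extract(...) ...
    output = extractor.create_extracts(name="midplane data", description="values along the midplane")

    output.wait()                                       # poll until completed or failed
    for rows, label in output.download_data():          # rows[0] is the CSV header
        print(label, len(rows) - 1, "data rows")
    output.save_files("midplane", write_labels=True)    # midplane_0.csv, ... plus midplane.json
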
class Field

The field controls the field displayed on the object. If the field doesn’t exist, we show a solid color.

Warning

This feature is experimental and may change or be removed in the future.

component: luminarycloud.enum.FieldComponent

The component of the field to use, applicable to vector fields. If the field is a scalar, the component field is ignored. Default: MAGNITUDE.

quantity: luminarycloud.enum.VisQuantity

The quantity to color by.

class FixedSizeVectorGlyphs(name: str)

Vector Glyphs is a vector field visualization technique that places arrows (i.e., glyphs) in the 3D scene, oriented in the direction of the underlying vector field. Fixed size vector glyphs place fixed-size arrows at points sampled from the mesh. This filter is only valid on vector fields.

Warning

This feature is experimental and may change or be removed in the future.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

display_attrs
field: luminarycloud.vis.display.Field
id
name: str
property sampling_rate: int
size: float = 1.0
class InteractiveScene(scene: luminarycloud.vis.visualization.Scene, mode: luminarycloud.enum.vis_enums.SceneMode)

The InteractiveScene acts as the bridge between the Scene and the Jupyter widget. It checks that the widget package is available before passing calls to the widget, since the widget is an optional dependency.

compare(entity: luminarycloud.geometry.Geometry | luminarycloud.mesh.Mesh | luminarycloud.solution.Solution) None
get_camera() luminarycloud.vis.visualization.LookAtCamera
reset_camera() None
set_camera(camera: luminarycloud.vis.visualization.LookAtCamera) None
set_color_map(color_map: luminarycloud.vis.visualization.ColorMap) None
set_display_attributes(object_id: str, attrs: luminarycloud.vis.display.DisplayAttributes) None
set_scene(scene: luminarycloud.vis.visualization.Scene, isComparator: bool) None
set_surface_color(surface_id: str, color: list[float]) None
set_surface_visibility(surface_id: str, visible: bool) None
set_triad_visible(visible: bool) None
widget
class IntersectionCurve(name: str)

Generate line data by computing intersections between solution surfaces and a slice plane.

Extracts 1D curves where surfaces intersect the specified cutting plane, preserving solution field values at intersection points.

Warning

This feature is experimental and may change or be removed in the future.

add_surface(id: str) None

Add a surface to compute the intersection curve on. Adding no surfaces indicates that all surfaces will be used. The id can either be a tag or explicit surface id. These values will be validated by the DataExtractor before sending the request.

Parameters:
id: str

A surface id or a tag id.

id
label: str = ''
name
property plane: luminarycloud.vis.primitives.Plane
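
For example, a sketch that submits an intersection curve through a DataExtractor (it assumes the Plane returned by the plane property can be edited in place and that an IntersectionCurve is accepted by add_data_extract; the ids and coordinates are illustrative):

    from luminarycloud.vis import DataExtractor, IntersectionCurve

    curve = IntersectionCurve(name="midspan section")
    curve.label = "y = 1.5 section"
    curve.plane.origin = [0.0, 1.5, 0.0]   # a point on the cutting plane
    curve.plane.normal = [0.0, 1.0, 0.0]   # plane normal
    curve.add_surface("wing")              # a tag id or surface id; omit to use all surfaces

    extractor = DataExtractor(solution)
    extractor.add_data_extract(curve)
    output = extractor.create_extracts(name="sections", description="wing surface sections")
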
class Isosurface(name: str)

Isosurface is used to evaluate scalar fields at constant values, known as isovalues. In volumes, isosurface produces surfaces, and in surfaces, isosurface produces lines (isolines). Isosurface is also known as contouring and as level-sets.

Warning

This feature is experimental and may change or be removed in the future.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

display_attrs
field: luminarycloud.vis.display.Field
id
isovalues: List[float] = []
name
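
For example, a sketch of an isosurface with several isovalues (the field quantity is a placeholder; it would normally be one of the values returned by list_quantities, and scene is an existing Scene):

    from luminarycloud.vis import Isosurface

    iso = Isosurface(name="isosurfaces")
    iso.field.quantity = ...          # a luminarycloud.enum.VisQuantity, e.g. from list_quantities(solution)
    iso.isovalues = [0.1, 0.5, 1.0]   # one isosurface (or isoline, on surfaces) per value
    # scene.add_filter(iso)
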
class LookAtCamera

Class defining a look at camera for visualization. Unlike the directional camera, which is placed relative to what is visible, the look at camera is an explicit camera, meaning that all of its parameters must be fully specified.

Warning

This feature is experimental and may change or be removed in the future.

height: int = 1024

The height of the output image in pixels. Default: 1024

label: str = ''

A user defined label to help distinguish between multiple images. Default: “”

look_at: luminarycloud.types.Vector3Like

The point the camera is looking at. Default (0,0,0)

pan_x: float = 0

Pan the camera in the x direction (right). This is a world space value defined in the camera coordinate system. Pan is not typically directly set by the user, but pan is useful for reproducing camera parameters from an interactive scene where pan is used (i.e., control + middle mouse).

pan_y: float = 0

Pan the camera in the y direction (up). This is a world space value defined in the camera coordinate system. Pan is not typically directly set by the user, but pan is useful for reproducing camera parameters from an interactive scene where pan is used (i.e., control + middle mouse).

position: luminarycloud.types.Vector3Like

The position of the camera. Default (0,1,0)

projection: luminarycloud.enum.CameraProjection

The type of projection used for the camera. Default: ORTHOGRAPHIC

up: luminarycloud.types.Vector3Like

The up vector for the camera. Default (0,0,1)

width: int = 1024

The width of the output image in pixels. Default: 1024
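
For example, a sketch of an explicit camera positioned upstream and looking back at the origin (positions, sizes, and the label are illustrative; scene is an existing Scene):

    from luminarycloud.vis import LookAtCamera

    cam = LookAtCamera()
    cam.position = [-5.0, 0.0, 1.0]   # camera location in world space
    cam.look_at = [0.0, 0.0, 0.0]     # point the camera looks at
    cam.up = [0.0, 0.0, 1.0]          # documented default up vector
    cam.width = 1280
    cam.height = 720
    cam.label = "front quarter view"
    # scene.add_camera(cam)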

class Plane

This class defines a plane.

Warning

This feature is experimental and may change or be removed in the future.

normal: luminarycloud.types.Vector3Like

The vector orthogonal to the plane. Default: [0,1,0]

origin: luminarycloud.types.Vector3Like

A point defined on the plane. Default: [0,0,0].

class PlaneClip(name: str)

Clip the dataset using a plane. Cells in the direction of the plane normal are kept, while the cells in the opposite direction are removed.

Warning

This feature is experimental and may change or be removed in the future.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

display_attrs
id
inverted: bool = False
name
property plane: luminarycloud.vis.primitives.Plane
class RakeStreamlines(name: str)

Streamlines is a vector field visualization technique that integrates massless particles through a vector field, forming curves. Streamlines are used to visualize and analyze fluid flow patterns (e.g., the velocity field), helping to understand how the fluid moves. Streamlines can be used to visualize any vector field contained in the solution.

RakeStreamlines generates seed particles evenly spaced along a line defined by specified start and end points. RakeStreamlines only work with volume data.

Warning

This feature is experimental and may change or be removed in the future.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

direction: luminarycloud.enum.StreamlineDirection
display_attrs
end: luminarycloud.types.Vector3Like
field: luminarycloud.vis.display.Field
id
max_length: float = 10
n_streamlines: int = 100
name: str
start: luminarycloud.types.Vector3Like
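
As an illustration, a sketch of streamlines seeded along a vertical rake upstream of the model (the seed line is illustrative, the field quantity is a placeholder, and scene is an existing Scene):

    from luminarycloud.vis import RakeStreamlines

    rake = RakeStreamlines(name="upstream rake")
    rake.start = [-2.0, 0.0, 0.0]     # first seed point of the rake
    rake.end = [-2.0, 0.0, 1.0]       # last seed point; seeds are evenly spaced in between
    rake.n_streamlines = 50
    rake.max_length = 20
    rake.field.quantity = ...         # a vector VisQuantity, e.g. from list_quantities(solution)
    # scene.add_filter(rake)
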
class RenderOutput(factory_token: luminarycloud.vis.vis_util._InternalToken)

The render output represents the request to render images from a geometry, mesh, or solution, and is constructed by the Scene class. The operation executes asynchronously, so the caller must check the status of the image extract. If the status is completed, then the resulting image is available for download.

Warning

This class should not be directly instantiated by users.

Warning

This feature is experimental and may change or be removed in the future.

delete() None

Delete the image.

download_images() List[Tuple[io.BytesIO, str]]

Downloads the resulting JPEG images into binary buffers. This is useful for displaying images in notebooks. If the status is not complete, an error will be raised.

Returns:

List[Tuple[io.BytesIO, str]]: a list of tuples containing the binary image data and the user provided image label (camera.label).

Warning

This feature is experimental and may change or be removed in the future.

refresh() RenderOutput

Refresh the status of the RenderOutput.

Returns:
self
save_images(file_prefix: str, write_labels: bool = False) None

A helper for downloading and saving the resulting images to the file system. If the status is not complete, an error will be raised. Images will be of the form {file_prefix}_{index}.jpg. Optionally, a file can be written containing a list of file names and image labels. Labels are an optional field in the camera (camera.label).

Warning

This feature is experimental and may change or be removed in the future.

Parameters:
file_prefix: str, required

The file prefix to save the images. An image index and '.jpg' will be appended to the file names.

write_labels: bool, optional

If True, write a JSON file containing a list of image file names and labels. The resulting JSON file is named '{file_prefix}.json'. Default: False

wait(interval_seconds: float = 5, timeout_seconds: float = float('inf')) luminarycloud.enum.RenderStatusType

Wait until the RenderOutput is completed or failed.

Parameters:
interval_seconds: float, optional

Number of seconds between polls.

timeout_seconds: float, optional

Number of seconds before timeout.

Returns:
RenderStatusType: Current status of the image extract.
description: str = ''
name: str = ''
status: luminarycloud.enum.RenderStatusType
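
For example, a sketch of waiting on a render request and retrieving the images (it assumes scene is a Scene, described below, with at least one camera; file names are illustrative):

    output = scene.render_images(name="overview", description="standard views of the solution")
    output.wait()                                        # poll until completed or failed
    output.save_images("overview", write_labels=True)    # overview_0.jpg, ... plus overview.json

    # or keep the images in memory, e.g. for display in a notebook:
    for i, (buffer, label) in enumerate(output.download_images()):
        with open(f"render_{i}.jpg", "wb") as f:
            f.write(buffer.getvalue())
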
class ScaledVectorGlyphs(name: str)

Vector Glyphs is a vector field visualization technique that places arrows (i.e., glyphs) in the 3D scene, oriented in the direction of the underlying vector field. Scaled vector glyphs change the size of the arrows based on the magnitude of the vector. For example, when visualizing the velocity field, a glyph whose magnitude is twice that of another glyph will appear twice as large.

Warning

This feature is experimental and may change or be removed in the future.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

display_attrs
field: luminarycloud.vis.display.Field
id
name: str
property sampling_rate: int
scale: float = 1.0
class Scene(entity: luminarycloud.geometry.Geometry | luminarycloud.mesh.Mesh | luminarycloud.solution.Solution)

The scene class is the base for any visualization. The scene is constructed with what “entity” you want to visualize: a solution, a mesh, or a geometry.

Global display attributes: The global display attributes control the default appearance of all the surfaces (i.e., boundaries). Attributes include visibility, what fields are displayed on the surfaces (if applicable), and representation (e.g., surface, surface with edges, ...).

Individual surface visibilities can be overridden to hide/show specific surfaces. Additionally, if the scene is constructed around a simulation, a helper method is provided to automatically hide surfaces associated with far fields.

Warning

This feature is experimental and may change or be removed in the future.

add_camera(camera: DirectionalCamera | LookAtCamera) None

Add a camera to the scene. Each camera added produces an image.

add_color_map(color_map: luminarycloud.vis.display.ColorMap) None

Add a color map to the scene. If a color map with the field already exists, it will be overwritten.

add_filter(filter: luminarycloud.vis.filters.Filter) None

Add a filter to the scene. Filters are not currently supported with geometries and will raise an error if added.

clone(entity: luminarycloud.geometry.Geometry | luminarycloud.mesh.Mesh | luminarycloud.solution.Solution) Scene

Clone this scene, basing it on a new entity. The new entity must be of the same type as the previous one. For example, you can't swap a scene based on a geometry with a solution. This is a deep copy operation. Both entities must be compatible with one another, meaning they share the tags or surface ids used for setting surface visibilities and for some filters like surface streamlines and surface LIC.

hide_far_field() None

Hide all far field surfaces based on simulation parameters. Will only work if the entity is a simulation; otherwise it will raise an error.

interact(scene_mode: luminarycloud.enum.SceneMode = SceneMode.SIDE_PANEL) luminarycloud.vis.interactive_scene.InteractiveScene

Start an interactive display of the scene when running inside LuminaryCloud's AI Notebook environment or Jupyter Lab. The returned object must be displayed in the notebook to show the interactive visualization. This requires that the luminarycloud package was installed with the optional jupyter feature.

render_images(name: str, description: str) RenderOutput

Create a request to render images of the scene using the scene's cameras.

Parameters:
name: str

A short name for the renders.

description: str

A longer description of the scene and renderings.

surface_ids() List[str]

Get a list of all the surface ids associated with the mesh.

surface_visibility(surface_id: str, visible: bool) None

Explicitly override the visibility of a surface by id. When calculating final visibilities, we first apply overrides to the global display attributes using tags, then surface ids.

tag_ids() List[str]

Get a list of all the tag ids associated with the entity.

tag_visibility(tag_id: str, visible: bool) None

Explicitly override the visibility based on tag id. When calculating final visibilities, we first apply overrides to the global display attributes using tags, then surface ids.

auto_color_map_annotations = True
axes_grid_visible: bool = False
background_color: luminarycloud.types.Vector3
global_display_attrs
supersampling: int = 2
triad_visible: bool = True
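
Putting the pieces together, a sketch of a scene built from a solution with one filter, one camera, and a render request (solution stands in for an existing Solution; Slice is described below, and the names are illustrative):

    from luminarycloud.vis import DirectionalCamera, Scene, Slice

    scene = Scene(solution)
    scene.hide_far_field()                      # only valid when the entity is a simulation result
    scene.global_display_attrs.opacity = 0.5    # make the boundary surfaces semi-transparent

    scene.add_filter(Slice(name="midplane"))    # filters are not supported for geometry scenes
    scene.add_camera(DirectionalCamera())       # one image per camera

    render = scene.render_images(name="midplane", description="midplane slice of the solution")
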
class Slice(name: str)

The slice filter is used to extract a cross-section of a 3D dataset by slicing it with a plane.

Warning

This feature is experimental and may change or be removed in the future.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

display_attrs
id
name
property plane: luminarycloud.vis.primitives.Plane
property project_vectors: bool
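
For example, a sketch of a midplane slice (it assumes the Plane returned by the plane property can be edited in place; the plane values are illustrative, and scene is an existing Scene):

    from luminarycloud.vis import Slice

    midplane = Slice(name="y = 0 slice")
    midplane.plane.origin = [0.0, 0.0, 0.0]   # a point on the slice plane
    midplane.plane.normal = [0.0, 1.0, 0.0]   # slice normal
    # scene.add_filter(midplane)
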
class SurfaceLIC(name: str)

A Surface Line Integral Convolution (LIC) filter is used to depict the flow direction and structure of vector fields (such as velocity) on surfaces. It enhances the perception of complex flow patterns by convolving noise textures along streamlines, making it easier to visually interpret the behavior of fluid flow on boundaries or surfaces in a simulation.

The input is a list of surfaces. If none are specified, all are used. The surface LIC outputs grey scale colors on the specified surfaces. When the display attributes quantity is not None, the field colors are blended with the grey scale colors.

Note: surface LIC is computed on the same surfaces as the solution. If those surfaces are not hidden via the global display attributes, the surface LIC will not be visible, since the existing surfaces occlude it.

Warning

This feature is experimental and may change or be removed in the future.

add_surface(id: str) None

Add a surface to compute the surface LIC on. Adding no surfaces indicates that all surfaces will be used.

Parameters:
id: str

A surface id or a tag id.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

contrast: float = 1.0
display_attrs
field: luminarycloud.vis.display.Field
id
name
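
For example, a sketch that hides the original surfaces and adds a surface LIC on a tagged boundary (the tag id is illustrative, and solution stands in for an existing Solution):

    from luminarycloud.vis import Scene, SurfaceLIC

    scene = Scene(solution)
    scene.tag_visibility("body", False)   # hide the original surfaces so the LIC is not occluded

    lic = SurfaceLIC(name="body LIC")
    lic.add_surface("body")               # a tag id or surface id; omit to use all surfaces
    scene.add_filter(lic)
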
class SurfaceStreamlines(name: str)

Streamlines is a vector field visualization technique that integrates massless particles through a vector field, forming curves. Streamlines are used to visualize and analyze fluid flow patterns (e.g., the velocity field), helping to understand how the fluid moves. Streamlines can be used to visualize any vector field contained in the solution.

Surface streamlines has two different modes:
  • ADVECT_ON_SURFACE: constrain particles to the surfaces of the mesh.

  • ADVECT_IN_VOLUME: use surface points to seed volumetric streamlines.

The advection mode also affects what fields can be used. For example, velocity is zero on walls, so when using ADVECT_ON_SURFACE use a field that has non-zero values, such as wall shear stress.

Example use cases for ADVECT_IN_VOLUME:
placing seeds on an inlet surface and integrating in the forward direction.

  • placing seeds on an outlet surface and integrating in the backward direction.

  • placing seeds on the tires of a car or on the wing of an airplane.

Example use cases for ADVECT_ON_SURFACE:
  • Understanding forces on walls such as wall shear stress.

Warning

This feature is experimental and may change or be removed in the future.

add_surface(id: str) None

Add a surface to generate seed points from.

Parameters:
id: str

A surface id or a tag id.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

direction: luminarycloud.enum.StreamlineDirection
display_attrs
field: luminarycloud.vis.display.Field
id
max_length: float = 10
mode: luminarycloud.enum.SurfaceStreamlineMode
n_streamlines: int = 100
name: str
offset: float = 0.0
sampling_rate = 100
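
For example, a sketch of volume streamlines seeded from an inlet surface (the surface id is illustrative, the field quantity is a placeholder, and scene is an existing Scene):

    from luminarycloud.enum import SurfaceStreamlineMode
    from luminarycloud.vis import SurfaceStreamlines

    lines = SurfaceStreamlines(name="inlet streamlines")
    lines.mode = SurfaceStreamlineMode.ADVECT_IN_VOLUME   # seed on the surface, integrate in the volume
    lines.add_surface("inlet")                            # a tag id or surface id
    lines.n_streamlines = 200
    lines.max_length = 25
    lines.field.quantity = ...                            # a vector VisQuantity, e.g. velocity
    # scene.add_filter(lines)
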
class Threshold(name: str)

The threshold filter is used to remove cells based on a data range. Cells with values within the range (i.e., between min_value and max_value) are kept. All other cells are removed.

Warning

This feature is experimental and may change or be removed in the future.

get_parent_id() str

Returns the filter’s parent id. An empty string will be returned if there is no parent.

reset_parent() None

Reset the parent of this filter to the original dataset.

set_parent(filter: Any) None

Set this filter’s parent filter. This controls what data the filter uses as input. Filters can be chained into a DAG. If no parent is set, then this filter uses the original dataset.

Parameters:
filter: Filter

The filter to use as the parent.

display_attrs
field: luminarycloud.vis.display.Field
id
invert: bool = False
max_value: float = 1.0
min_value: float = 0.0
name: str
smooth: bool = False
strict: bool = False
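
For example, a sketch of a threshold that keeps cells inside a range (the field quantity is a placeholder, the range is illustrative, and scene is an existing Scene):

    from luminarycloud.vis import Threshold

    thresh = Threshold(name="pressure band")
    thresh.field.quantity = ...     # a VisQuantity, e.g. from list_quantities(solution)
    thresh.min_value = 0.0
    thresh.max_value = 101325.0
    thresh.invert = False           # keep cells inside the range (documented default)
    # scene.add_filter(thresh)
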
list_data_extracts(solution: luminarycloud.solution.Solution) List[ExtractOutput]

Lists all previously created data extracts associated with a project and a solution.

Warning

This feature is experimental and may change or be removed in the future.

Parameters:
solution: Solution

The solution to query for data extracts.

list_quantities(solution: luminarycloud.solution.Solution) List[luminarycloud.enum.VisQuantity]

List the quantity types, including derived quantities, that are available in a solution.

Warning

This feature is experimental and may change or be removed in the future.

Parameters:
solution: Solution

The solution object to query.

list_renders(entity: luminarycloud.geometry.Geometry | luminarycloud.mesh.Mesh | luminarycloud.solution.Solution) List[RenderOutput]

Lists all previously created renders associated with a project and an entity.

Warning

This feature is experimental and may change or be removed in the future.

Parameters:
entity: Geometry | Mesh | Solution

The entity to query; determines what type of renders are listed (e.g., geometry, mesh, or solution).
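
For example, a sketch of the listing functions (solution stands in for an existing Solution; list_renders also accepts a Geometry or Mesh):

    from luminarycloud.vis import list_data_extracts, list_quantities, list_renders

    for quantity in list_quantities(solution):
        print(quantity)                       # VisQuantity values available for this solution

    for render in list_renders(solution):
        print(render.name, render.status)     # previously created renders for this entity

    for extract in list_data_extracts(solution):
        print(extract.name, extract.status)   # previously created data extracts for this solution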