| |
- Boost.Python.enum(builtins.int)
-
- blockscaletype
- compressionmethod
- derivativeorder
- detrendmode
- ewma_type
- filter_scaletype
- filtertype
- hierarchical_types
- modeltype
- operator
- phase_iteration_treatment
- polynomialorder
- powerexponent
- prediction_source
- scaletype
- transformtype
- waveletfunctiontype
- ytreatment
- Boost.Python.instance(builtins.object)
-
- BatchLevelCreator
- DataCollection
-
- Dataset
- DatasetInfo
- Filter
- ModelInfo
- ModelOptions
- PhaseCropper
- Predictionset
- Project
- ProjectData
- ProjectDataBuilder
- ProjectHandler
- ProjectOptions
- UmMat
- UmVec
- ValueIDs
- Workset
class BatchLevelCreator(Boost.Python.instance) |
|
Handles specification and creation of batch level datasets.
To create batch level datasets, specify which types of datasets to create
by setting the dataset_types property. Optionally add or remove variables, scores, etc.,
and call apply(). After apply() has been called, the object should not be used anymore. |
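An illustrative sketch of this workflow, assuming umetrics.simca is imported as simca,
an open project is available as 'project' (see the Project class), and model number 1
is a fitted batch evolution model (these are assumptions, not part of the API description):
>>> blc = simca.BatchLevelCreator(project, 1)
>>> blc.dataset_types = blc.DatasetType.Scores | blc.DatasetType.DurationEndPoint
>>> created = blc.apply()   # list with the numbers of the created datasets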
|
- Method resolution order:
- BatchLevelCreator
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- __init__( (object)self, (Project)project, (int)model_number) -> None :
Constructs a BatchLevelCreator that can create batch level datasets in the specified
project using the specified model or model group.
If the model is a mother model (batch evolution model, BEM) all models in the group will be used.
If the model is a model in a BEM, all other models in the same BEM will also be used.
- __reduce__ = (...)
- apply(...)
- apply( (BatchLevelCreator)arg1) -> list :
Creates the specified datasets.
After calling apply, the object should not be used anymore.
Returns a list containing the numbers of the created datasets.
- exists(...)
- exists( (BatchLevelCreator)self, (DatasetType)dataset_type) -> bool :
Returns true if a dataset of the specified type has already been created.
Only one dataset of each type can be created per batch evolution model.
- get_available_variables(...)
- get_available_variables( (BatchLevelCreator)self, (int)phase_index) -> list :
Returns the names of all variables that are available to use for summary statistics
and raw data batch level datasets for the given phase.
phase_index --- The phase for which variables are to be returned.
- get_included_variables(...)
- get_included_variables( (BatchLevelCreator)self, (int)phase_index) -> list :
Returns the names of the currently included variables.
These are the variables that will be used to create summary statistics
and raw data batch level datasets.
The default is to include all variables that have been included in the corresponding phase model.
phase_index --- The index of the phase for which the variable names are returned.
The first phase has index 0.
- get_used_models(...)
- get_used_models( (BatchLevelCreator)self) -> list :
Returns a list containing a ModelInfo for each of the models that are used
to create the batch level datasets.
Note that if the BatchLevelCreator was created to use a model group or one model
in a model group, the list contains all models in the group.
- set_components(...)
- set_components( (BatchLevelCreator)self, (int)phase_index, (object)components) -> None :
Sets the components to use when creating batch level datasets of type DatasetType.Scores.
The number of components available for each phase can be obtained from the ModelInfo
returned by get_used_models.
The default is to use all components.
phase_index --- The index of the phase.
components --- A list of the components to use.
- set_variables(...)
- set_variables( (BatchLevelCreator)self, (int)phase_index, (object)variable_names) -> None :
Sets the variables to use when creating summary statistics and raw data batch level datasets.
The default is to use all available variables.
phase_index --- The index of the phase for which the variables are to be used.
variable_names --- The names of the variables to use.
Static methods defined here:
- create_BL_datasets_from_unused_batches(...)
- create_BL_datasets_from_unused_batches( (int)model_number, (Project)project) -> list :
Creates additional batch level datasets from all batches that have not already been used
in batch level datasets.
The new datasets are created with the same settings as the existing ones.
model_number --- The number of a batch evolution model that is used to create the batch level datasets.
If the model is a mother model (batch evolution model, BEM)
all models in the group will be used.
If the model is a model in a BEM, all other models in the same BEM will also be used.
Returns a list containing the numbers of the created datasets.
Data descriptors defined here:
- dataset_types
- The types of the dataset(s) that are to be created.
This is one or more of the DatasetType enums ORed together.
Default is DatasetType.RawData | DatasetType.DurationEndPoint
- statistics_summary_types
- The types of statistics which are to be used when creating batch level datasets of type DatasetType.Statistics.
One or more of the StatisticsSummaryTypes enums ORed together.
Default is StatisticsSummaryTypes.Mean | StatisticsSummaryTypes.StdDev
Data and other attributes defined here:
- DatasetType = <class 'umetrics.simca.DatasetType'>
- The different types of batch level datasets.
Only one dataset of each type can be created per batch evolution model.
DurationEndPoint --- Contains duration and end points for each phase.
Scores --- Contains scores from the batch evolution models (not available for OPLS batch models).
RawData --- Contains raw data.
Statistics --- Contains summary statistics of raw data (mean, standard deviation etc)
- StatisticsSummaryTypes = <class 'umetrics.simca.StatisticsSummaryTypes'>
- The different types of statistics that can be used with batch level datasets of type DatasetType.Statistics
Min --- The minimum value.
Max --- The maximum value.
Mean --- The mean.
Median --- The median.
StdDev --- The standard deviation.
RobustStdDev --- The robust standard deviation (interquartile range divided by 1.075).
Interquartile --- The interquartile range (the third quartile minus the first)
Slope --- A variable's slope.
- __instance_size__ = 208
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class DataCollection(Boost.Python.instance) |
|
Defines the complete data used by Datasets, prediction sets and work sets. |
|
- Method resolution order:
- DataCollection
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- compare_with(...)
- compare_with( (DataCollection)arg1, (DataCollection)arg2, (float)arg3) -> None :
Compares two DataCollections and raises an umetrics.UmetricsException if they differ.
This is primarily intended for internal use at Umetrics.
precision is the precision of the comparison of floating point numbers.
precision = 0 => The numbers must be exactly equal.
precision = 1E-3 => The numbers can not differ in the third digit
precision = 1 => The numbers can not differ in the first digit
- get_data(...)
- get_data( (DataCollection)self) -> UmMat :
Returns a matrix with all values in the data collection.
- get_datasets(...)
- get_datasets( (DataCollection)self) -> list :
Returns a list of all datasets that are included in this data collection.
- get_obs_aliases(...)
- get_obs_aliases( (DataCollection)self) -> list :
Returns a list of the names of the observation aliases used in this data collection.
- get_obs_names(...)
- get_obs_names( (DataCollection)self [, (int)alias=0]) -> list :
Returns the observation names in the datasets used by this model.
alias --- the index of the desired observation alias (default 0 for primary ID)
- get_string_values(...)
- get_string_values( (DataCollection)self, (object)variable) -> list :
Returns the values of a variable as strings, used for qualitative and time variables.
variable --- the index or name of the desired variable (default 0 for the first)
- get_var_aliases(...)
- get_var_aliases( (DataCollection)self) -> list :
Returns a list of the names of the variable aliases used in this data collection.
- get_var_names(...)
- get_var_names( (DataCollection)self [, (int)alias=0]) -> list :
Returns the variable names in the datasets used by this model.
alias --- the index of the desired variable alias (default 0 for primary ID)
- is_qualitative(...)
- is_qualitative( (DataCollection)self, (object)variable) -> bool :
Returns true if the variable is a qualitative variable.
variable --- the index or name of the desired variable (default 0 for the first)
- is_time(...)
- is_time( (DataCollection)self, (object)variable) -> bool :
Returns true if the variable is a time variable.
variable --- the index or name of the desired variable (default 0 for the first)
- size_observations(...)
- size_observations( (DataCollection)self) -> int :
Returns number of observations in the data collection.
- size_variables(...)
- size_variables( (DataCollection)self) -> int :
Returns number of variables in the data collection.
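An illustrative sketch of reading data through this interface, assuming an open project
available as 'project' and that dataset number 1 exists (see the Project class):
>>> dc = project.get_dataset(1)      # a Dataset, which is a DataCollection
>>> names = dc.get_var_names()       # primary variable IDs
>>> data = dc.get_data()             # UmMat with all values
>>> dc.size_observations(), dc.size_variables()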
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class Dataset(DataCollection) |
|
Represents a SIMCA dataset. |
|
- Method resolution order:
- Dataset
- DataCollection
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- add_variable(...)
- add_variable( (Dataset)self, (object)name, (object)values [, (object)identifiers=None]) -> str :
Adds a variable to the dataset.
name --- The name of the new variable. It can also be a list of strings where the first
will be used as the primary variable ID and subsequent strings as secondary
variable IDs.
values --- Can be a list of numbers (which will create a quantitative variable) or
a list of strings (which will create a qualitative variable).
It can also be a dictionary that maps observation identifiers (observation index or names)
to values (which can be numbers or strings). If it is a list and no identifiers are supplied it must be the same length as
the number of observations in the dataset.
identifiers --- A list of identifiers (indexes or observation names) that is used to identify
what observation each value in 'values' belongs to.
This can be omitted if 'values' is a dictionary or the length of 'values' is
the same as the number of observations in the dataset.
Returns the primary identifier (name) of the new variable. Note that this will not be the same
as the 'name' argument if the project already contains a variable with that name.
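A sketch of the value forms accepted by add_variable; 'ds' is assumed to be a Dataset
obtained from Project.get_dataset, and the variable and observation names are hypothetical:
>>> ds.add_variable('Yield', [1.0, 2.0, 3.0])                            # list; length must equal the number of observations
>>> ds.add_variable('Grade', {'Obs1': 'low', 'Obs2': 'high'})            # dictionary keyed by observation identifiers
>>> ds.add_variable('Purity', [0.5, 0.7], identifiers=['Obs1', 'Obs2'])  # values paired with explicit identifiers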
Methods inherited from DataCollection:
- compare_with(...)
- compare_with( (DataCollection)arg1, (DataCollection)arg2, (float)arg3) -> None :
Compares two DataCollections and raises an umetrics.UmetricsException if they differ.
This is primarily intended for internal use at Umetrics.
precision is the precision of the comparison of floating point numbers.
precision = 0 => The numbers must be exactly equal.
precision = 1E-3 => The numbers can not differ in the third digit
precision = 1 => The numbers can not differ in the first digit
- get_data(...)
- get_data( (DataCollection)self) -> UmMat :
Returns a matrix with all values in the data collection.
- get_datasets(...)
- get_datasets( (DataCollection)self) -> list :
Returns a list of all datasets that are included in this data collection.
- get_obs_aliases(...)
- get_obs_aliases( (DataCollection)self) -> list :
Returns a list of the names of the observation aliases used in this data collection.
- get_obs_names(...)
- get_obs_names( (DataCollection)self [, (int)alias=0]) -> list :
Returns the observation names in the datasets used by this model.
alias --- the index of the desired observation alias (default 0 for primary ID)
- get_string_values(...)
- get_string_values( (DataCollection)self, (object)variable) -> list :
Returns the values of a variable as strings, used for qualitative and time variables.
variable --- the index or name of the desired variable (default 0 for the first)
- get_var_aliases(...)
- get_var_aliases( (DataCollection)self) -> list :
Returns a list of the names of the variable aliases used in this data collection.
- get_var_names(...)
- get_var_names( (DataCollection)self [, (int)alias=0]) -> list :
Returns the variable names in the datasets used by this model.
alias --- the index of the desired variable alias (default 0 for primary ID)
- is_qualitative(...)
- is_qualitative( (DataCollection)self, (object)variable) -> bool :
Returns true if the variable is a qualitative variable.
variable --- the index or name of the desired variable (default 0 for the first)
- is_time(...)
- is_time( (DataCollection)self, (object)variable) -> bool :
Returns true if the variable is a time variable.
variable --- the index or name of the desired variable (default 0 for the first)
- size_observations(...)
- size_observations( (DataCollection)self) -> int :
Returns number of observations in the data collection.
- size_variables(...)
- size_variables( (DataCollection)self) -> int :
Returns number of variables in the data collection.
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class DatasetInfo(Boost.Python.instance) |
|
Contains information about a dataset. |
|
- Method resolution order:
- DatasetInfo
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- __repr__(...)
- __repr__( (DatasetInfo)self) -> str
- __str__(...)
- __str__( (DatasetInfo)self) -> str
Data descriptors defined here:
- ID
- A number that uniquely identifies the dataset
- Yvariables
- The number of Y variables in the dataset
- name
- The name of the dataset
- observations
- The number of observations in the dataset.
- type
The type of the dataset. This is a combination of the DatasetType enums
- variables
- The number of variables
Data and other attributes defined here:
- DatasetType = <class 'umetrics.simca.DatasetType'>
- Different dataset types
A dataset type can be a combination of these ORed together.
regular --- Not a batch level, spectral or hierarchical dataset
spectral --- Contains spectral filtered data,
can be combined with batchevolution and batchlevel
hierarchical --- Contains hierarchical data,
can be combined with batchevolution and batchlevel
plsda --- Contains Y variables used with PLS - DA models,
can be combined with batchevolution and batchlevel
batchlevel --- Batch level dataset.
batchevolution --- Batch evolution level dataset.
batchcondition --- A batch level dataset that only contains imported data.
lagdistance --- Contains distance variables used for dynamic lags.
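An illustrative sketch listing the datasets of an open project ('project' is assumed to be
an open Project; see Project.get_dataset_infos):
>>> for info in project.get_dataset_infos():
...     print(info.ID, info.name, info.observations, info.variables)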
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class Filter(Boost.Python.instance) |
|
Used to add different filtering options.
When filtering is finished, a new filtered dataset is created.
All original variables remain unchanged in the original dataset.
Apply several filters in combination by using the add_filter_type method.
The filters can be applied in any order, although each filter can be applied only once.
To finish filtering and create a dataset, call apply_filter. |
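A minimal workflow sketch; 'project' is assumed to be an open Project, the variable names are
hypothetical, and the filtertype member below is a placeholder that must be replaced with a
real value from the filtertype enum:
>>> flt = project.create_filter(1)                       # filter dataset 1
>>> flt.add_filter_type(simca.filtertype.some_filter)    # placeholder; use a member of the filtertype enum
>>> flt.set_variables(['Var1', 'Var2'])                  # hypothetical variable names
>>> flt.apply_filter('Filtered dataset')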
|
- Method resolution order:
- Filter
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- add_filter_type(...)
- add_filter_type( (Filter)self, (filtertype)filtertype) -> None :
Add a specific filter.
filtertype --- The filter type to use and change settings for.
See filtertype enum for different options.
- apply_filter(...)
- apply_filter( (Filter)self, (str)datasetname) -> None :
When filtering is completed, a new filtered dataset is created.
datasetname --- The name of the new dataset that will be created.
- exclude_observations(...)
- exclude_observations( (Filter)self, (object)observations) -> None :
excludes observations from use in the filter.
observations --- Contains the IDs or index of the observations that should be excluded.
The parameter can either be a list of indices or the observation names.
- exclude_variables(...)
- exclude_variables( (Filter)self, (object)variables) -> None :
excludes variables from use in the filter.
variables --- Contains the IDs or index of the variables that should be excluded.
The parameter can either be a list of indices or the variable names.
- get_available_wavelet_coefficients(...)
- get_available_wavelet_coefficients( (Filter)self) -> int :
Returns the total number of available wavelet coefficients when energy is retained by variance,
see set_energy_retained_by_variance which must have been called first
- get_detail_level_energy(...)
- get_detail_level_energy( (Filter)self, (int)detail_level_index) -> float :
Returns the retained energy for a specific detail index when energy is retained by details,
see set_energy_retained_by_detail which must have been called first
detail_level_index --- the detail index to get information from, between 0 and (get_num_detail_levels - 1)
- get_detail_level_size(...)
- get_detail_level_size( (Filter)self, (int)detail_level_index) -> int :
Returns the number of coefficients for a specific detail index when energy is retained by details,
see set_energy_retained_by_detail which must have been called first
detail_level_index --- the detail index to get information from, between 0 and (get_num_detail_levels - 1)
- get_num_detail_levels(...)
- get_num_detail_levels( (Filter)self) -> int :
Returns the total available number of detail levels for wavelet filters when
energy is retained by details, see set_energy_retained_by_detail which must have been called first
- get_osc_angle_ss(...)
- get_osc_angle_ss( (Filter)self) -> float :
Returns the angle after the last component in an OSC filter.
- get_osc_eigenvalue(...)
- get_osc_eigenvalue( (Filter)self) -> float :
Returns the eigenvalue after the last component in an OSC filter.
- get_osc_remaining_ss(...)
- get_osc_remaining_ss( (Filter)self) -> float :
Returns the remaining sum of squares after the last component in an OSC filter.
- get_retained_energy(...)
- get_retained_energy( (Filter)self) -> float :
Returns the % energy retained with the current settings for wavelet filters.
set_energy_retained_by_detail or set_energy_retained_by_variance must have been called first.
- set_derivative_order(...)
- set_derivative_order( (Filter)self, (derivativeorder)derivativeorder) -> None :
Sets the derivative order. Default set to firstderivative.
derivativeorder --- See derivativeorder enum for different options.
- set_detrend(...)
- set_detrend( (Filter)self, (detrendmode)detrendmode) -> None :
Set the detrend mode for wavelet filter.
detrendmode --- See detrendmode enum for different options.
- set_distance_between_points(...)
- set_distance_between_points( (Filter)self, (float)distancepoints) -> None :
Sets the distance between each point.
distancepoints --- The distance between each point. Default set to 1.
- set_energy_retained_by_detail(...)
- set_energy_retained_by_detail( (Filter)self) -> None :
Sets the energy retained by detail for wavelet filter.
- set_energy_retained_by_variance(...)
- set_energy_retained_by_variance( (Filter)self, (compressionmethod)compressionmethod) -> None :
Set the energy retained by variance for wavelet filter.
compressionmethod --- See compressionmethod enum for different options.
- set_ewma_type(...)
- set_ewma_type( (Filter)self, (ewma_type)ewma_type) -> None :
Sets the type of EWMA (filter or predictive)
ewma_type --- the EWMA type, filter or predictive. Default is filter.
- set_lambda(...)
- set_lambda( (Filter)self, (float)lambda) -> None :
Sets the lambda value.
The lambda value must be between 0 and 1.
Default set to missing.
lambda --- The lambda value.
- set_observations(...)
- set_observations( (Filter)self, (object)observations) -> None :
Set the observations to include in the new dataset.
observations --- Contains the IDs or index of the observations that should be included.
The parameter can either be a list of indices or the observation names.
- set_osc_components(...)
- set_osc_components( (Filter)self, (int)num_components) -> None :
Sets the number of components to use in an OSC filter.
num_components --- the desired number of components, >= 1
- set_plugin(...)
- set_plugin( (Filter)self, (str)pluginname) -> None :
set the plugin name to use
pluginname --- the name of the plugin
- set_polynomial_order(...)
- set_polynomial_order( (Filter)self, (polynomialorder)polynomialorder) -> None :
Sets the polynomial order. Default set to quadratic.
polynomialorder --- See polynomialorder enum for different options.
- set_scaling(...)
- set_scaling( (Filter)self, (object)variables, (filter_scaletype)filter_scaletype) -> None :
Sets the scaling to use for the specified variables.
Should be called after set_variables and set_y_variables.
variables --- Contains the IDs or indices of the variables that should be scaled.
The parameter can either be a list of indices or the variable names.
filter_scaletype --- See the filter_scaletype enum for different options.
- set_submodel_points(...)
- set_submodel_points( (Filter)self, (int)submodelpoints) -> None :
Sets the number of points in each sub-model.
This number defaults to 15 and has to be odd and >= 5.
submodelpoints --- The number of points in each sub-model.
- set_target_variable(...)
- set_target_variable( (Filter)self, (object)variable) -> None :
set the target variable used by time series filters
variable --- The name or index of the target variable
- set_transformation(...)
- set_transformation( (Filter)self, (object)variables, (transformtype)transformtype, (powerexponent)powerexponent, (float)constant1, (float)constant2) -> None :
Set the variables to transform.
Should be called after set_variables and set_y_variables.
variables --- Contains the IDs or index of the variables that should be transformed.
The parameter can either be a list of indices or the variable names.
- set_variables(...)
- set_variables( (Filter)self, (object)variables) -> None :
Set the variables to include in the new dataset.
variables --- Contains the IDs or index of the variables that should be included.
The parameter can either be a list of indices or the variable names.
- set_wavelet_coefficients(...)
- set_wavelet_coefficients( (Filter)self, (int)num_coefficients) -> None :
Sets the number of wavelet coefficients to use when energy is retained by variance;
see set_energy_retained_by_variance, which must have been called first.
num_coefficients --- the desired number of wavelet coefficients, >= 2
- set_wavelet_details(...)
- set_wavelet_details( (Filter)self, (list)detail_indices) -> None :
Sets which detail indices to use in a wavelet filter when energy is retained by details;
see set_energy_retained_by_detail, which must have been called first.
detail_indices --- a list containing the desired detail indices, between 1 and [get_num_detail_levels]
- set_waveletfunction(...)
- set_waveletfunction( (Filter)self, (waveletfunctiontype)waveletfunctiontype [, (int)order=1]) -> None :
Sets the wavelet function type and order for wavelet filter.
waveletfunctiontype --- See waveletfunctiontype enum for different options.
order --- See detrendmode enum for valid orders for different waveletfunctiontype.
- set_wdts_decimation(...)
- set_wdts_decimation( (Filter)self, (int)decimation) -> None :
Sets the decimation to use for WDTS (wavelet denoising time series filter). The resulting dataset will include only
every n'th observation, where n is the decimation specified.
Should be called after set_observations.
decimation --- The decimation to use. It has to be a power of 2, e.g. 1, 2, 4, 8, or 16, with a maximum value of the number of observations / 2 rounded down to a power of 2.
- set_y_variables(...)
- set_y_variables( (Filter)self, (object)variables) -> None :
Set the Y variables to use in the filter.
variables --- Contains the IDs or index of the variables that should be included.
The parameter can either be a list of indices or the variable names.
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class ModelInfo(Boost.Python.instance) |
|
Contains information about a model. |
|
- Method resolution order:
- ModelInfo
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- __repr__(...)
- __repr__( (ModelInfo)self) -> str
- __str__(...)
- __str__( (ModelInfo)self) -> str
- set_description(...)
- set_description( (ModelInfo)self, (str)title) -> None :
Sets the model's description.
Data descriptors defined here:
- Xvariables
- The number of X variables.
- Yvariables
- The number of Y variables.
- children
The numbers of the child models. An empty list if the model doesn't have children.
- components
- The number of components.
For OPLS and O2PLS models this is the number of predictive components.
- created
- A datetime object representing when the model was created in UTC.
- description
The model's description.
- expanded
- The number of expanded terms.
- isfitted
- True if the model is fitted.
- lagged
- The number of lagged terms.
- mother
- The number of the mother model. Zero if there is no mother model.
- name
- The model name.
- number
- The model number. If this number is negative the model is a mother to other models (a model group).
- observations
- The number of observations.
- orthogonalinX
- The number of X-block orthogonal components.
Always zero for non OPLS/O2PLS models.
- orthogonalinY
- The number of Y-block orthogonal components.
Always zero for non OPLS/O2PLS models.
- type
- The type of the model.
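An illustrative sketch listing the models of an open project (see Project.get_model_infos):
>>> for mi in project.get_model_infos():
...     print(mi.number, mi.name, mi.components, mi.isfitted)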
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class ModelOptions(Boost.Python.instance) |
|
Various options that affect coefficient types, DModX, etc. |
|
- Method resolution order:
- ModelOptions
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- __str__(...)
- __str__( (ModelOptions)self) -> str
- apply(...)
- apply( (ModelOptions)arg1) -> None :
Sets the options to the model.
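An illustrative sketch of changing an option; 'project' is assumed to be an open Project
and model number 1 is assumed to exist:
>>> opts = project.get_model_options(1)
>>> opts.parameter_confidence_level = 0.99
>>> opts.apply()    # write the changed options back to the model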
Data descriptors defined here:
- adjusted_R2
- If True, R2 is adjusted by the degrees of freedom.
- coefficient_type
- The type of coefficients that will be used by default.
One of the coefficient_types enum.
- limits
The alarm limits for this model, given as a list of dictionaries (one per alarm).
Through the limits property you can set and get alarm limits.
Available types are: var, t, DModX, DModX+, PModX, PModX+ and T2Range.
For each type of vector there are target, lolo, lo, hi and hihi limits.
Each alarm type can have triggers to raise an alarm.
Example 1: Setting a score vector alarm with limits. The extra argument 'comp' sets
which component the alarm is for.
limits = [{'type' : 't', 'comp' : 1, 'limits' : { 'lolo' : -4, 'lo' : -3, 'hi' : 3, 'hihi' : 4 }}]
Example 2: Setting a variable vector alarm with limits. The extra argument 'name' specifies
the variable name to set the alarm for.
limits = [{'type' : 'var', 'name' : 'Agitation speed', 'limits' : { 'lolo' : -100, 'target' : 100, 'hihi' : 200 }}]
Example 3: Setting a DModX vector alarm.
limits = [{'type' : 'DModX', 'limits' : { 'hihi' : 20 }}]
Example 4: Setting a T2Range vector alarm. The extra argument 'comp' is used for setting the component range. The 'comp'
argument is not used for OPLS models.
limits = [{'type' : 'T2Range', 'comp' : { 'from' : 1, 'to' : 4 }, 'limits' : { 'hihi' : 20 }}]
Adding triggers is done by adding a 'triggerLogic' member to the alarm.
limits[0]['triggerLogic'] = { 'numObservations' : 3, 'missing' : True, 'numObsInWindow' : 8, 'sizeWindow' : 100 }
Available members are 'numObservations', which is how many consecutive observations
can be outside the limits before triggering an alarm, and 'missing', which is used when a missing observation should trigger alarms.
The 'numObsInWindow' and 'sizeWindow' fields control how many observations can be outside the limits within a window before triggering
an alarm. This is a moving window across all observations. The settings 'numObsInWindow' : 8 and 'sizeWindow' : 100
mean that an alarm will be triggered if there are more than 8 observations outside the limits in a window of 100 observations.
- normalize_DModX
- If True, DModX is normalized in units of standard deviation.
- parameter_confidence_level
- The confidence level for model parameters (coefficients, VIP etc).
Must be less than 1 and greater than 0.
- resolve_hierarchical_coefficients
If True, top level hierarchical coefficients are resolved into the original variables.
- scale_predictions
- If True, predictions are in the scaled metric.
Scaled predictions also imply transformed predictions (see transform_predictions below).
- significance_for_DModX_T2
- The significance level used for DModX and Hotellings T2.
Must be less than 1 and larger than 0.
- standardized_residuals
- If True, residuals are standardized in units of standard deviations.
- transform_predictions
If True, predictions are in the transformed metric.
Untransformed predictions also imply unscaled predictions (see scale_predictions above).
- trim_predictions
If True, predictions will be trimmed in the same way as the workset.
- weighted_DModX
- If True, DModX is weighted by the modeling power of each variable.
- zero_T2_range_mean
- This value should only be set to True if the model is to be used to
create lead-centered onion designs in MODDE.
If True, 0 will be used as the T center value when calculating T2Range,
otherwise, the arithmetic mean will be used.
Data and other attributes defined here:
- coefficient_types = <class 'umetrics.simca.coefficient_types'>
- One of:
scaled_and_centered --- Coefficients for scaled and centered X and scaled Y
MLR --- Coefficients for scaled and centered X and unscaled and uncentered Y
unscaled --- Coefficients for both X and Y in the original metric (unscaled, uncentered).
rotated --- Coefficients rotated to correspond as much as possible to pure profiles.
See the statistical appendix in SIMCA's help.
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class PhaseCropper(Boost.Python.instance) |
|
Defines trimming for phases in a workset. |
|
- Method resolution order:
- PhaseCropper
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- crop_beginning(...)
- crop_beginning( (PhaseCropper)self, (int)size) -> None :
Sets how many observations at the beginning of all batches should be removed.
size --- Number of observations to remove.
- crop_end(...)
- crop_end( (PhaseCropper)self, (int)size) -> None :
Sets how many observations at the end of all batches should be removed.
size --- Number of observations to remove.
- crop_variable(...)
- crop_variable( (PhaseCropper)self, (object)variable, (operator)operation, (float)firstlimit [, (float)secondlimit=0]) -> None :
Crops batches according to the values of a variable.
variable --- The name or the index of the variable to crop on.
operation --- The comparison operation to use.
See the operator enum for different values.
firstlimit --- The first limit to compare with.
secondlimit --- The second limit to compare with. Only used by operations that require two limits.
- down_size(...)
- down_size( (PhaseCropper)self, (int)interval) -> None :
Remove every Nth observation.
interval --- How often an observation should be removed.
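An illustrative trimming sketch; 'cropper' is assumed to be a PhaseCropper obtained from a
workset (how it is obtained is not covered in this class entry):
>>> cropper.crop_beginning(5)   # remove the first 5 observations of every batch
>>> cropper.crop_end(5)         # remove the last 5 observations of every batch
>>> cropper.down_size(2)        # remove every 2nd observation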
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class Predictionset(Boost.Python.instance) |
|
Defines observations that can be saved as a predictionset. |
|
- Method resolution order:
- Predictionset
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- add_source(...)
- Adds observations to the predictionset from the given source
for example a dataset or a class.
source --- The source to use. See prediction_source enum for more information.
name --- The name or the index to use for different sources:
Workset: "name" is ignored, set to 0
Complement: "name" is ignored, set to 0
Class: "name" is a class number or name of the class
Dataset: "name" is a dataset number or name of the dataset
Predictionset:"name" is the name of the predictionset
observations --- A list of names of the observations that should be included
or an empty list to include all observations.
- as_class(...)
- Creates a prediction set with the observations from a class in the workset
class --- The name or number of the class to use
- as_complement(...)
- Creates a prediction set with the complement observations as in the workset
or if it's a batch evolution model, complement batches
- as_dataset(...)
- Creates a prediction set with the observations from a dataset
dataset --- The name or number of the dataset to use
- as_workset(...)
- Creates a prediction set with the same observations as in the workset
- save(...)
- Save the prediction set to be able to use it for predictions
name --- The name of the prediction set. If the name exists already,
then that prediction set will be replaced
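An illustrative sketch, assuming an open project with a fitted model number 1 and a dataset number 2:
>>> ps = project.create_predictionset(1)
>>> ps.as_dataset(2)               # use the observations of dataset 2
>>> ps.save('My predictionset')    # replaces any existing predictionset with the same name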
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class Project(Boost.Python.instance) |
|
The interface to SIMCA projects.
---- Project life time ----
Projects differ from ordinary Python objects because it is not the Python runtime that
'owns' them, the C++ code in the embedding application does.
Projects can be closed by the application which leaves corresponding Python Project objects 'empty'.
Also, the embedding application might not close projects even if a Python program tries to.
To handle this, the scope of Python Project objects should be restricted as much as possible,
and they should be disposed of when they are no longer needed. The following design is recommended:
>>> with ProjectHandler.create_project(file_path) as project:
... if project:
... # Do stuff with project.
>>> # The Python object will always be deleted. The underlying C++ object might not.
If projects are cached, check 'is_open' to see if the project is still open. |
|
- Method resolution order:
- Project
- Boost.Python.instance
- builtins.object
Methods defined here:
- __bool__(...)
- __bool__( (Project)Self) -> bool
- __enter__(...)
- __enter__( (Project)self) -> Project
- __eq__(...)
- __eq__( (Project)arg1, (Project)arg2) -> object
- __exit__(...)
- __exit__( (Project)self, (object)type, (object)value, (object)traceback) -> None :
Closes the project if it isn't the active project
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- create_dataset(...)
- create_dataset( (Project)self, (ImportData)data, (str)datasetname) -> int :
Create a new dataset from the specified data. Returns the number of the new dataset.
data --- The data to create the dataset from
datasetname --- The unique name of the dataset
- create_filter(...)
- create_filter( (Project)self [, (object)dataset=None]) -> Filter :
Returns a new filter object.
dataset --- The dataset to use for filtering.
This can be a dataset number or name.
- create_predictionset(...)
- create_predictionset( (Project)self, (int)model) -> Predictionset :
Creates a new predictionset from the given model.
model --- Number of the model
- create_workset(...)
- create_workset( (Project)self [, (object)datasets=[] [, (object)phase_iterations=[]]]) -> Workset :
Returns a new default Workset.
datasets --- Contains the IDs or names of the dataset(s) that the new workset will use.
If this argument is an empty list,
the default datasets will be used. If no default has been specified,
the first dataset will be used
phase_iterations --- A list of treatments for the phase iterations for batch level models. See phase_iteration_treatment.
- data_builder(...)
- data_builder( (Project)self) -> ProjectDataBuilder :
Create a new project data builder object.
- delete_dataset(...)
- delete_dataset( (Project)self, (object)dataset) -> None :
Deletes a dataset. All models depending on the dataset will be removed.
Note that dataset 1 cannot be deleted.
dataset --- The dataset to delete
This can be dataset number or name
- delete_model(...)
- delete_model( (Project)self, (int)model) -> None :
Deletes the indicated model.
If the model is a mother model, all models in the model group will be deleted.
model --- Number of the model
- delete_predictionset(...)
- delete_predictionset( (Project)self, (str)name) -> None :
Delete a predictionset.
name --- Name of the predictionset to delete
- edit_workset(...)
- edit_workset( (Project)self, (int)model) -> Workset :
Returns the workset of the selected model.
model --- the number of the model to edit
- fit_model(...)
- fit_model( (Project)self, (int)model [, (int)numcomp=-1 [, (int)mincomp=-1 [, (int)maxcomp=-1 [, (int)numorthX=0 [, (int)numorthY=0 [, (int)numpcaX=0 [, (int)numpcaY=0]]]]]]]) -> tuple :
Fits the model. If no parameters are specified, it will
automatically determine the best number of components.
Returns number of predictive, X orthogonal and Y orthogonal components.
model --- The number of the model to fit.
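An illustrative fitting sketch, assuming model number 1 exists in the open project:
>>> pred, orth_x, orth_y = project.fit_model(1)   # autofit: the best number of components is chosen
>>> project.is_model_fitted(1)                    # should now return True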
- generate_variables(...)
- generate_variables( (Project)self, (object)dataset, (str)formula, (object)names) -> None :
Generates new variables in the supplied dataset
dataset --- Contains the number or name of the dataset to generate variables in.
formula --- The recipe for the new variable(s)
names --- List of name(s) of the new variables, the length must match the number of generated variables
- get_active_model(...)
- get_active_model( (Project)arg1) -> int :
Returns the number of the active model.
A negative number indicates that the model is a mother model and a whole model group is active.
A return value of zero indicates that there are no active models.
- get_dataset(...)
- get_dataset( (Project)self, (int)number) -> Dataset :
Returns the dataset
number --- The number of the dataset
- get_dataset_infos(...)
- get_dataset_infos( (Project)self) -> list :
Returns a list containing a DatasetInfo for each dataset in the project.
- get_group_model(...)
- get_group_model( (Project)self, (int)model) -> str :
Returns the name of the group model for a batch or class model
or empty string if the model doesn't belong to any group.
model --- The number of the model to get group model from.
- get_hierarchical_base_type(...)
- get_hierarchical_base_type( (Project)self, (int)model) -> list :
Returns a list with the hierarchical types exported for the model.
see hierarchical_types for the types returned
model --- Number of the model
- get_model_infos(...)
- get_model_infos( (Project)self) -> list :
Returns a list containing a ModelInfo for each model in the project.
- get_model_options(...)
- get_model_options( (Project)self [, (int)model_number=-1]) -> ModelOptions :
Returns the options for the specified model.
- get_name(...)
- get_name( (Project)self) -> str :
Get project name.
- get_predictionset_data(...)
- get_predictionset_data( (Project)self, (str)name) -> DataCollection :
Get a data collection object of what is included in the predictionset.
name --- Name of the predictionset to get data for
- get_predictionsets(...)
- get_predictionsets( (Project)self) -> list :
Returns a list containing the names of all predictionsets.
- get_project_options(...)
- get_project_options( (Project)self) -> ProjectOptions :
Returns the projects options.
- is_model_fitted(...)
- is_model_fitted( (Project)self, (int)model) -> bool :
Returns true if a model is fitted
model --- The number of the model to test.
- merge_datasets(...)
- merge_datasets( (Project)self, (object)destination, (object)source) -> None :
Merge two datasets and the result will be in the destination dataset.
The source dataset will be deleted and all models depending on the source
dataset will be removed.
destination --- The dataset that will contain the merged result.
This can be dataset number or name
source --- The source dataset to merge with the destination dataset.
This dataset will be removed after merging
This can be dataset number or name
- new_as_workset(...)
- new_as_workset( (Project)self, (int)model) -> Workset :
Returns a copy of the workset of the selected model but with a new model number.
model --- the number of the model to copy
- save(...)
- save( (Project)self) -> bool :
Saves the project. Returns True if successful.
- set_active_model(...)
- set_active_model( (Project)self, (int)model [, (bool)modelgroup=False]) -> None :
Sets the active model.
If modelgroup is true, the active model will be set to the mother of the indicated model if it has one,
i.e. the whole model group that the model belongs to will be active.
If the model number is negative (i.e. a mother model), the whole group will also be selected.
- set_hierarchical_base_type(...)
- set_hierarchical_base_type( (Project)self, (int)model, (object)hierarchical_types) -> None :
Export vectors from the model to use in new models.
If vectors are already exported, the old ones and all dependent models will be deleted.
Types not available for the selected model type will be ignored (e.g. y_orthogonal_res for a PCA model).
model --- The number of the model
hierarchical_types --- A list of types to export from the model see hierarchical_types
- set_predictionset(...)
- set_predictionset( (Project)self, (str)name) -> None :
Set the predictionset that should be used.
name --- Name of the predictionset to use
- transpose_dataset(...)
- transpose_dataset( (Project)self, (object)dataset) -> None :
Transposes a dataset. All models depending on the dataset will be removed.
dataset --- The dataset to transpose
This can be dataset number or name
Data descriptors defined here:
- is_batch_project
- True if the project is a batch project, otherwise False
Closed projects should not be used.
- is_open
- True if the project is open (still valid), otherwise False
Closed projects should not be used.
- num_datasets
- The number of datasets
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class ProjectData(Boost.Python.instance) |
|
Matrix of data from a project.
Each ProjectData has a matrix containing series of values, series names and
IDs that identifies the values in each series.
Series names are just arbitrary strings that describes each series but value IDs are used
to match values from different series with each other and can be variable or observation names
or arbitrary (but unique) strings. |
|
- Method resolution order:
- ProjectData
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- __init__( (object)self, (object)mat, (ValueIDs)value_ids, (object)series_names, (Project)project) -> None :
Constructs a ProjectData from a matrix, column ids and series names.
mat --- a row major matrix containing the data.
value_ids --- identifies the values (columns) in the data.
series_names --- the name of each series (row) in the data.
project --- The project associated with the data.
- __reduce__ = (...)
- get_value_ids(...)
- get_value_ids( (ProjectData)self) -> ValueIDs :
Returns the ValueIDs that identifies the values in each series.
- matrix(...)
- matrix( (ProjectData)self) -> UmMat :
Returns a matrix with all values in the project data.
- series_names(...)
- series_names( (ProjectData)self [, (int)alias=1]) -> list :
Returns the series names for the matrix of this project data.
alias --- the index of the desired alias (default 1 for primary name)
- size_series_aliases(...)
- size_series_aliases( (ProjectData)self) -> int :
Returns number of row alias names.
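An illustrative sketch of obtaining and inspecting a ProjectData; it assumes an open project
with a fitted model number 1, and uses 'DModX', one of the vectors listed under ProjectDataBuilder.create:
>>> pdata = project.data_builder().create('DModX', model=1)
>>> values = pdata.matrix()        # UmMat with the values
>>> rows = pdata.series_names()    # one name per series (row)
>>> ids = pdata.get_value_ids()    # identify the values within each series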
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class ProjectDataBuilder(Boost.Python.instance) |
|
Creates project data from model or predictions. |
|
- Method resolution order:
- ProjectDataBuilder
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- contribution(...)
- contribution( (ProjectDataBuilder)self, (ContributionType)type, (int)model, (object)reference=None, (object)group, (object)weight [, (object)comp=None [, (bool)predicted=False [, (str)variable='']]]) -> ProjectData :
Creates a contribution project data object.
type --- Type of contribution to calculate
model --- Model number
reference --- Reference group of observations, or a reference list/tuple of ([string]batch, [double]maturity, [string]phase iteration (if the project has phase iterations)) for batch projects. An empty batch means the average observation at that maturity.
group --- Comparison group of observations, or a list/tuple of ([string]batch, [double]maturity, [string]phase iteration (if the project has phase iterations)) for batch projects. At least one observation or (batch, maturity, phase iteration) pair is needed.
weight --- A list of weight contributions. At least one weight is necessary.
comp --- A list of components matching weights list when loadings are used.
predicted --- If the calculations are done on predicted observations.
variable --- A Y variable
- create(...)
- create( (ProjectDataBuilder)self, (str)name [, (int)model=0 [, (int)dataset=0 [, (object)comp=None [, (int)selcv=0 [, (str)variable='' [, (str)variable2='' [, (str)observation='' [, (str)batch='' [, (str)phase='' [, (str)identifier='' [, (bool)transformed=False [, (bool)scaled=False [, (bool)aligned=False]]]]]]]]]]]]]) -> ProjectData :
Creates a project data object.
name --- Name of the vector or matrix data to calculate
model --- Model number
dataset --- Dataset number. Only dataset vectors will be searched if this argument is set
comp --- Component argument for score or loading
selcv --- Cross validation round in scores
variable --- Variable to display
observation --- Observation to display
batch --- Batch to display
phase --- Phase to display
identifier --- Secondary observation or variable identifier to display
transformed --- If variable should be transformed or back transformed
scaled --- If variable should be scaled or unscaled
aligned --- If vector should be aligned or unaligned
Available vectors:
BatchVIP: The Batch Variable Importance plot (Batch VIP) is
available for batch level projects. It displays the
overall importance of the variable on the final quality
of the batch. With phases, the plot displays the
importance of a variable by phase. With a PLS model,
the Batch VIP displays one plot for each y-variable
with a column per selected phase.
Note: The Batch VIP is only available when the scores
are selected as evolution variables for the primary
dataset of the batch level project.
Arguments: model=<int>, comp=<predictive comp>, phase=<phase name>
BatchVIP: The Batch Variable Importance plot (Batch VIP) is
available for batch level projects. It displays the
overall importance of the variable on the final quality
of the batch. With phases, the plot displays the
importance of a variable by phase. With a PLS model,
the Batch VIP displays one plot for each y-variable
with a column per selected phase.
Note: The Batch VIP is only available when the scores
are selected as evolution variables for the primary
dataset of the batch level project.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
c: For every dimension in the PLS model there is a c
vector. It contains the Y loading weights used to
linearly combine the Y's to form the Y score vector u.
This means the c vector actually expresses the
correlation between the Y's and the X score vector t.
Arguments: model=<int>, comp=<predictive comp>
c(corr): Y loading weight c scaled as a correlation coefficient
between Y and u.
Arguments: model=<int>, comp=<predictive comp>
ccv: Y loading weight c for a selected model dimension,
computed from the selected cross-validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>
ccvSE: Jack-knife standard error of the Y loading weight c
computed from the rounds of cross-validation.
Arguments: model=<int>, comp=<predictive comp>
co: Orthogonal Y loading weights co combine the Y
variables (first dimension) or the Y residuals
(subsequent dimensions) to form the scores Uo.
These orthogonal Y loading weights are selected so as
to minimize the correlation between Uo and T,
thereby indirectly between Uo and X.
Arguments: model=<int>, comp=<yorth comp>
cocv: Orthogonal Y loading weights co from the Y-part of
the model, for a selected model dimension,
computed from the selected cross-validation round.
Arguments: model=<int>, comp=<yorth comp>, selcv=<int>
cocvSE: Jack-knife standard error of the orthogonal Y loading
weights co, computed from the cross-validation
procedure.
Arguments: model=<int>, comp=<yorth comp>
Coeff: PLS regression coefficients
corresponding to the unscaled and uncentered X and Y.
This vector is cumulative over all components up to
the selected one.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
Coeff: OPLS/O2PLS regression coefficients
corresponding to the unscaled and uncentered X and Y. This vector is cumulative over all components.
Arguments: model=<int>, variable=<y variable name>
CoeffC: PLS regression coefficients corresponding to
the unscaled but centered X and unscaled Y. This
vector is cumulative over all components up to the
selected one.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
CoeffC: OPLS regression coefficients corresponding to
the unscaled but centered X and unscaled Y. This
vector is cumulative over all components.
Arguments: model=<int>, variable=<y variable name>
CoeffCS: PLS regression coefficients corresponding to
centered and scaled X, and scaled (but uncentered) Y.
This vector is cumulative over all components up to
the selected one.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
CoeffCS: OPLS regression coefficients corresponding to
centered and scaled X, and scaled (but uncentered) Y.
This vector is cumulative over all components.
Arguments: model=<int>, variable=<y variable name>
CoeffCScv: PLS regression coefficients corresponding to
the centered and scaled X and the scaled (but
uncentered) Y computed from the selected cross-
validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>, variable=<y variable name>
CoeffCScv: OPLS regression coefficients corresponding to
the centered and scaled X and the scaled (but
uncentered) Y computed from the selected cross-
validation round.
Arguments: model=<int>, selcv=<int>, variable=<y variable name>
CoeffCScvSE(PLS): Jack-knife standard error of the coefficients CoeffCS
computed from all rounds of cross-validation.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
CoeffCScvSE(OPLS): Jack-knife standard error of the coefficients CoeffCS
computed from all rounds of cross-validation.
Arguments: model=<int>, variable=<y variable name>
CoeffCScvSELag: Jack-knife standard error on the
coefficients as a function of lag, computed from
all cross-validation rounds.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>, variable2=<y variable name>
CoeffCSLag: Coefficients (for scaled and centered data) of a
lagged variable x, for a selected Y as a function of
lags.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>, variable2=<y variable name>
CoeffMLR: PLS regression coefficients corresponding to
the scaled and centered X but unscaled and
uncentered Y. This vector is cumulative over all
components up to the selected one.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
CoeffMLR: OPLS regression coefficients corresponding to
the scaled and centered X but unscaled and
uncentered Y. This vector is cumulative over all
components.
Arguments: model=<int>, variable=<y variable name>
CoeffRot: Rotated PLS regression coefficients
corresponding to the unscaled and uncentered X and
Y. This vector is cumulative over all components up to
the selected one.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
CoeffRot: Rotated OPLS regression coefficients
corresponding to the unscaled and uncentered X and
Y. This vector is cumulative over all components.
Arguments: model=<int>, variable=<y variable name>
CVGroups: The cross-validation group that each observation is
assigned to.
Arguments: model=<int>
Date/Time: The default date or time variable
Arguments: model=<int>
Date/Time-PS: The default date or time variable
Arguments: model=<int>
DModX Aligned: Distance to the model in X space (row residual SD),
after A components (the selected model dimension),
for the observations used to fit the model. If you
select component 0, it is the standard deviation of
the observations with scaling and centering as
specified in the workset, i.e., it is the distance to the
origin of the scaled coordinate system.
(A) = Absolute distance.
(N) = Normalized distance.
(M) = Mpow weighted residuals.
Arguments: model=<int>, comp=<predictive comp>, batch=<batch name>, aligned=True
DModX: Distance to the model in X space (row residual SD),
after A components (the selected model dimension),
for the observations used to fit the model.
(A) = Absolute distance.
(N) = Normalized distance.
(M) = Mpow weighted residuals.
Arguments: model=<int>, comp=<predictive comp>
DModX Aligned: Distance to the model in X space (row residual SD),
after all components,
for the observations used to fit the model.
(A) = Absolute distance.
(N) = Normalized distance.
(M) = Mpow weighted residuals.
Arguments: model=<int>, batch=<batch name>, aligned=True
DModX: Distance to the model in X space (row residual SD),
after all components.
(A) = Absolute distance.
(N) = Normalized distance.
(M) = Mpow weighted residuals.
Arguments: model=<int>
DModXAlignedOOCSum: For a plot of DModX vs. Maturity
for a batch in the workset the out of control (OOC) sum
is the area outside the control limits expressed as a
percentage of the total area. The summation is made relative
to a DModX vector aligned to median length.
Arguments: model=<int>, comp=<predictive comp>
DModXPS Aligned: Distance to the model in the X space (row residual
SD), after A components (the selected dimension),
for new observations in the predictionset. Displaying
component 0, it is the standard deviation of the
observations with scaling as specified in the workset
times w (correction factor for possible workset
observations, see statistical appendix), i.e., it is the
distance to the origin of the scaled coordinate
system.
Arguments: model=<int>, comp=<predictive comp>, batch=<batch name>, aligned=True
DModXPS: Distance to the model in the X space (row residual
SD), after A components (the selected dimension),
for new observations in the predictionset. Displaying
component 0, it is the standard deviation of the
observations with scaling as specified in the workset
times w (correction factor for possible workset
observations, see statistical appendix), i.e., it is the
distance to the origin of the scaled coordinate
system.
Arguments: model=<int>, comp=<predictive comp>
DModXPS Aligned: Distance to the model in the X space (row residual
SD), after all components,
for new observations in the predictionset.
Arguments: model=<int>, batch=<batch name>, aligned=True
DModXPS: Distance to the model in the X space (row residual
SD), after A components (the selected dimension),
for new observations in the predictionset. Displaying
component 0, it is the standard deviation of the
observations with scaling as specified in the workset
times w (correction factor for possible workset
observations, see statistical appendix), i.e., it is the
distance to the origin of the scaled coordinate
system.
Arguments: model=<int>
DModXPS+: Combination of DModXPS and Hotelling's T2 when
the latter is outside the critical limit for observations
in the predictionset.
Arguments: model=<int>, comp=<predictive comp>
DModXPS+ (OPLS): Combination of DModXPS and Hotelling's T2 when
the latter is outside the critical limit for observations
in the predictionset.
Arguments: model=<int>
DModXPSAlignedOOCSum: For a plot of DModX vs. Maturity for
a batch in the predictionset the out of control (OOC) sum is
the area outside the control limits expressed as a percentage
of the total area.
Arguments: model=<int>, comp=<predictive comp>
DModY: Distance to the model in the Y space (row residual SD)
after A components (the selected model dimension)
for the observations used to fit the model. If you
select component 0, it is the standard deviation of
the observations with scaling and centering as
specified in the workset.
Arguments: model=<int>, comp=<predictive comp>
DModY: Distance to the model in the Y space (row residual SD)
after all components,
for the observations used to fit the model.
Arguments: model=<int>
DModYPS: Distance to the model in the Y space (row residual SD)
after A components (the selected model dimension)
for observations in the predictionset. If you select
component 0, it is the standard deviation of the
observations with scaling and centering as specified
in the workset.
Arguments: model=<int>, comp=<predictive comp>
DModYPS: Distance to the model in the Y space (row residual SD)
after all components,
for observations in the predictionset.
Arguments: model=<int>
Eig: Eigenvalues of the X matrix.
Arguments: model=<int>
Iter: Number of iterations of the algorithm till
convergence.
Arguments: model=<int>
LagID:
Arguments: model=<int>, variable=<variable name>
LocalCentering:
Arguments: model=<int>
LocalCenteringPS:
Arguments: model=<int>
MPowX: The modeling power of variable X is the fraction of its
standard deviation explained by the model after the
specified component.
Arguments: model=<int>, comp=<predictive comp>
ObsDS: Observation in the primary or secondary dataset
(selected in the Data box) in original units.
Arguments: dataset=<int>, observation=<observation name>
ObsID:
Arguments: identifier=<secondary id name>
ObsID:
Arguments: dataset=<int>, identifier=<secondary id name>
ObsID: Numerical observation identifiers, primary,
secondary, batch, or phase.
Arguments: model=<int>, identifier=<secondary id name>
ObsPS: Observation in the current predictionset, in original
units. There is only one defined predictionset at a time.
Arguments: observation=<observation name>
OLevX: The leverage is a measure of the influence of a point
(observation) on the PC model or the PLS model in
the X space.
The observation leverages are computed as the
diagonal elements of the matrix H0 after A
dimensions.
H0 = T[T'T]^-1 T'.
Arguments: model=<int>, comp=<predictive comp>
OLevY: The leverage is a measure of the influence of a point
(observation) on the PLS model in the Y space.
The observation leverages are computed as the
diagonal elements of the matrix Hy after A
dimensions.
Hy = U[U'U]^-1 U'.
Arguments: model=<int>, comp=<predictive comp>
p: Loadings of the X-part of the model.
With a PCA model, the loadings are the coefficients
with which the X-variables are combined to form the
X scores, t.
The loading, p, for a selected PCA dimension,
represents the importance of the X-variables in that
dimension.
With a PLS model, p expresses the importance of the
variables in approximating X in the selected
component.
Arguments: model=<int>, comp=<predictive comp>
p(corr): X loading p scaled as a correlation coefficient
between X and t.
Arguments: model=<int>, comp=<predictive comp>
pc(corr): X loading p and Y loading weight c scaled as
correlation coefficients between X and t (p) and Y and
u (c), and combined to one vector.
Arguments: model=<int>, comp=<predictive comp>
pccvSE: Jack knife standard error of the combined X loading p
and Y loading weight c computed from all rounds of
cross-validation.
Arguments: model=<int>, comp=<predictive comp>
pcv: X loading p for a selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>
pcvSE: Jack knife standard error of the X loading p computed
from all rounds of cross validation.
Arguments: model=<int>, comp=<predictive comp>
pLag: X loading p of a lagged variable X, as a function of lags.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>
PModX: Probability of belonging to the model in the X space,
for observations used to fit the model. Component 0
corresponds to a point model, i.e., the center of the
coordinate system.
Observations with probability of belonging of less
than 5% are considered to be non-members, i.e., they
are different from the normal observations used to
build the model.
Arguments: model=<int>, comp=<predictive comp>
PModX: Probability of belonging to the model in the X space,
for observations used to fit the model.
Observations with probability of belonging of less
than 5% are considered to be non-members, i.e., they
are different from the normal observations used to
build the model.
Arguments: model=<int>
PModXPS: Probability of belonging to the model in the X space,
for new observations in the predictionset.
Component 0 corresponds to a point model, i.e., the
center of the coordinate system.
Observations with probability of belonging of less
than 5% are considered to be non-members, i.e., they
are different from the normal observations used to
build the model.
Arguments: model=<int>, comp=<predictive comp>
PModXPS: Probability of belonging to the model in the X space,
for new observations in the predictionset.
Observations with probability of belonging of less
than 5% are considered to be non-members, i.e., they
are different from the normal observations used to
build the model.
Arguments: model=<int>
PModXPS+: Combination of PModXPS and Hotelling's T2 when the
latter is outside the critical limit for observations in
the predictionset.
Arguments: model=<int>, comp=<predictive comp>
PModXPS+: Combination of PModXPS and Hotelling's T2 when the
latter is outside the critical limit for observations in
the predictionset.
Arguments: model=<int>
PModY: Probability of belonging to the model in the Y space,
for observations used to fit the model. Component 0
corresponds to a point model, i.e., the center of the
coordinate system.
Observations with probability of belonging of less
than 5% are considered to be non-members, i.e., they
are different from the normal observations used to
build the model.
Arguments: model=<int>, comp=<predictive comp>
PModY: Probability of belonging to the model in the Y space,
for observations used to fit the model. Component 0
corresponds to a point model, i.e., the center of the
coordinate system.
Observations with probability of belonging of less
than 5% are considered to be non-members, i.e., they
are different from the normal observations used to
build the model.
Arguments: model=<int>
PModYPS: Probability of belonging to the model in the
Y space, for new observations in the predictionset.
Component 0 corresponds to a point model, i.e., the
center of the coordinate system.
Observations with probability of belonging of less
than 5% are considered to be non-members, i.e., they
are different from the normal observations used to
build the model.
Arguments: model=<int>, comp=<predictive comp>
PModYPS: Probability of belonging to the model in the
Y space, for new observations in the predictionset.
Component 0 corresponds to a point model, i.e., the
center of the coordinate system.
Observations with probability of belonging of less
than 5% are considered to be non-members, i.e., they
are different from the normal observations used to
build the model.
Arguments: model=<int>
po: Orthogonal loading po of the X-part of the OPLS
model. po expresses the unique variability in X not
found in Y, i.e., X variation orthogonal to Y, in the
selected component.
Arguments: model=<int>, comp=<xorth comp>
po(corr): Orthogonal loading po of the X-part of the OPLS
model, scaled as the correlation coefficient between
X and to, in the selected component.
Arguments: model=<int>, comp=<xorth comp>
pocv: Orthogonal loading po of the X-part of the OPLS
model, for a selected model dimension, computed
from the selected cross validation round.
Arguments: model=<int>, comp=<xorth comp>, selcv=<int>
pocvSE: Jack knife standard error of the orthogonal loading po
of the X-part of the OPLS model, computed from all
rounds of cross validation.
Arguments: model=<int>, comp=<xorth comp>
poso(corr): Loadings of the X-part of the model (po) and the projection of To on Y (so)
concatenated to one vector, scaled as the correlation coefficient between
X and to, in the selected component.
Arguments: model=<int>, comp=<xorth comp>
pq(corr): X loading p and Y loading q scaled as correlation
coefficients between X and t (p) and Y and u (q), and
combined to one vector.
Arguments: model=<int>, comp=<predictive comp>
q: Loadings of the Y-part of the PLS/OPLS model.
q expresses the importance of the variables in
approximating Y variation correlated to X, in the
selected component. Y-variables with large q
(positive or negative) are highly correlated with t
(and X).
Arguments: model=<int>, comp=<predictive comp>
Q2: Fraction of the total variation of the X block (PCA) or
the Y block (PLS) that can be predicted by each
component.
Arguments: model=<int>
Q2(cum)progression: Cumulative Q2 for the extracted
components, showing the progression of cumulative values
for each added orthogonal component in the OPLS model,
e.g. 1+0, 1+1, 1+2.
Arguments: model=<int>
Q2cum: Cumulative Q2 for the extracted components.
Arguments: model=<int>
Q2VX: Predicted fraction, according to cross-validation, of
the variation of the X-variables, for the selected
component of a PCA model.
Arguments: model=<int>, variable=<variable name>
Q2VX: Predicted fraction, according to cross-validation, of
the variation of the X-variables, for the selected
component of a PCA model.
Arguments: model=<int>, comp=<predictive comp>
Q2VXcum: Cumulative predicted fraction, according to cross-
validation, of the variation of the X-variables.
Arguments: model=<int>, variable=<variable name>
Q2VXcum: Cumulative predicted fraction, according to cross-
validation, of the variation of the X-variables.
Arguments: model=<int>, comp=<predictive comp>
Q2VY: Predicted fraction, according to cross-validation, of
the variation of the Y-variables, for the selected
component of a PLS/OPLS model.
Arguments: model=<int>, variable=<y variable name>
Q2VY: Predicted fraction, according to cross-validation, of
the variation of the Y-variables, for the selected
component of a PLS/OPLS model.
Arguments: model=<int>, comp=<predictive comp>
Q2VYcum: Cumulative predicted fraction, according to cross-
validation, of the variation of the Y-variables.
Arguments: model=<int>, variable=<y variable name>
Q2VYcum: Cumulative predicted fraction, according to cross-
validation, of the variation of the Y-variables.
Arguments: model=<int>, comp=<predictive comp>
qcv: Y loading q for a selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>
qcvSE: Jack knife standard error of the Y loading q computed
from all rounds of cross validation.
Arguments: model=<int>, comp=<predictive comp>
qo: Orthogonal loading qo of the Y-part of the OPLS model.
qo expresses the unique variability in Y not found in
X, i.e., Y variation orthogonal to X, in the selected
component.
Arguments: model=<int>, comp=<yorth comp>
qocv: Orthogonal loading qo of the Y-part of the OPLS
model, for a selected model dimension, computed
from the selected cross validation round.
Arguments: model=<int>, comp=<yorth comp>, selcv=<int>
qocvSE: Jack knife standard error of the orthogonal loading qo
of the Y-part of the OPLS model, computed from all
rounds of cross-validation.
Arguments: model=<int>, comp=<yorth comp>
r: R is the projection of uo onto X.
R contains non-zero entries when the score matrix Uo
is not completely orthogonal to X. The norm of this
matrix is usually very small but is used to enhance the
predictions of X.
Arguments: model=<int>, comp=<yorth comp>
R2(cum)progression: Cumulative fraction of Y variation,
showing the progression of cumulative values for each
added orthogonal component in the OPLS model,
e.g. 1+0, 1+1, 1+2.
Arguments: model=<int>
R2VX:
Arguments: model=<int>, variable=<variable name>
R2VX: Explained fraction of the variation of the X-variables,
for the selected component.
Arguments: model=<int>, comp=<predictive comp>
R2VXAdj: Explained fraction of the variation of the X-variables,
adjusted for degrees of freedom, for the selected
component.
Arguments: model=<int>, comp=<predictive comp>
R2VXAdjcum: Cumulative explained fraction of the variation of the
X-variables, adjusted for degrees of freedom.
Arguments: model=<int>, variable=<variable name>
R2VXAdjcum: Cumulative explained fraction of the variation of the
X-variables, adjusted for degrees of freedom.
Arguments: model=<int>, comp=<predictive comp>
R2VXcum: Cumulative explained fraction of the variation of the
X-variables.
Arguments: model=<int>, variable=<variable name>
R2VXcum: Cumulative explained fraction of the variation of the
X-variables.
Arguments: model=<int>, comp=<predictive comp>
R2VY: Explained fraction of the variation of the Y-variables,
for the selected component.
Arguments: model=<int>, variable=<y variable name>
R2VY: Explained fraction of the variation of the Y-variables,
for the selected component.
Arguments: model=<int>, comp=<predictive comp>
R2VYAdj: Explained fraction of the variation of the Y-variables,
adjusted for degrees of freedom, for the selected
component.
Arguments: model=<int>, comp=<predictive comp>
R2VYAdjcum: Cumulative explained fraction of the variation of the
Y variables, adjusted for degrees of freedom.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
R2VYAdjcum: Cumulative explained fraction of the variation of the
Y variables, adjusted for degrees of freedom.
Arguments: model=<int>, comp=<predictive comp>
R2VYcum: Cumulative explained fraction of the variation of the
Y variables.
Arguments: model=<int>, variable=<y variable name>
R2VYcum: Cumulative explained fraction of the variation of the
Y variables.
Arguments: model=<int>, comp=<predictive comp>
R2X: Fraction of the total variation of the X block that can
be explained by each component.
Arguments: model=<int>
R2XAdj: Explained fraction of the variation of the X block,
adjusted for degrees of freedom, for the selected
component.
Arguments: model=<int>
R2XAdjcum: Cumulative explained fraction of the variation of the
X block, adjusted for degrees of freedom.
Arguments: model=<int>
R2Xcum: Cumulative explained fraction of the variation of the
X block.
Arguments: model=<int>
R2Y: Fraction of the total variation of the Y block that can
be explained by each component.
Arguments: model=<int>
R2YAdj: Explained fraction of the variation of the Y block,
adjusted for degrees of freedom, for the selected
component.
Arguments: model=<int>
R2YAdjcum: Cumulative explained fraction of the variation of the
Y block, adjusted for degrees of freedom.
Arguments: model=<int>
R2Ycum: Cumulative explained fraction of the variation of the
Y block.
Arguments: model=<int>
RMSEcv:
Arguments: model=<int>, variable=<y variable name>
RMSEcv: Root Mean Square Error, computed from the selected
cross validation round.
Arguments: model=<int>, comp=<predictive comp>
RMSEcv:
Arguments: model=<int>
RMSEcv-progression: Root Mean Square Error showing the
progression of RMSEcv values for each added orthogonal
component in the OPLS model, e.g. 1+0, 1+1, 1+2.
Arguments: model=<int>, variable=<y variable name>
RMSEE: Root Mean Square Error of the Estimation (the fit) for
observations in the workset.
Arguments: model=<int>, comp=<predictive comp>
RMSEE: Root Mean Square Error of the Estimation (the fit) for
observations in the workset.
Arguments: model=<int>
RMSEP: Root Mean Square Error of the Prediction for
observations in the predictionset.
Arguments: model=<int>, comp=<predictive comp>
RMSEP: Root Mean Square Error of the Prediction for
observations in the predictionset.
Arguments: model=<int>, variable=<y variable name>
RMSEP: Root Mean Square Error of the Prediction for
observations in the predictionset.
Arguments: model=<int>
S2VX: Residual variance of the X-variables, after the
selected component, scaled as specified in the
workset.
Arguments: model=<int>, comp=<predictive comp>
S2VY: Residual variance of the Y-variables, after the
selected component, scaled as specified in the
workset.
Arguments: model=<int>, comp=<predictive comp>
S2X: Variance of the X block. For component number A,
it is the residual variance of X after component A.
Arguments: model=<int>
S2Y: Variance of the Y block. For component number A,
it is the residual variance of Y after component A.
Arguments: model=<int>
SDt: Standard deviation of the X scores, T.
Arguments: model=<int>
SDu: Standard deviation of the Y scores, U.
Arguments: model=<int>
SerrL: Lower limit of the standard error of the predicted
response Y for an observation in the workset.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
SerrL: Lower limit of the standard error of the predicted
response Y for an observation in the workset.
Arguments: model=<int>, variable=<y variable name>
SerrLPS: Lower limit of the standard error of the predicted
response Y for a new observation in the predictionset.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
SerrLPS: Lower limit of the standard error of the predicted
response Y for a new observation in the predictionset.
Arguments: model=<int>, variable=<y variable name>
SerrU: Upper limit of the standard error of the predicted
response Y for an observation in the workset.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
SerrU: Upper limit of the standard error of the predicted
response Y for an observation in the workset.
Arguments: model=<int>, variable=<y variable name>
SerrUPS: Upper limit of the standard error of the predicted
response Y for a new observation in the predictionset.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
SerrUPS: Upper limit of the standard error of the predicted
response Y for a new observation in the predictionset.
Arguments: model=<int>, variable=<y variable name>
so: The projection of To on Y.
So contains non-zero entries when the score matrix To is not completely orthogonal to Y.
The norm of this matrix is usually very small but is used to enhance the predictions of Y.
Arguments: model=<int>, comp=<xorth comp>
SSX: Sum of squares of the X block. For component
number A, it is the X residual Sum of Squares after
component A.
Arguments: model=<int>
SSY: Sum of squares of the Y block. For component
number A, it is the Y residual Sum of Squares after
component A.
Arguments: model=<int>
t: Scores t, one vector for each model dimension, are
new variables computed as linear combinations of X.
They provide a summary of X that best approximates
the variation of X only (PC model), and both
approximate X and predict Y (PLS model).
Arguments: model=<int>, comp=<predictive comp>, batch=<batch name>, aligned=True
t: Scores t, one vector for each model dimension, are
new variables computed as linear combinations of X.
They provide a summary of X that best approximates
the variation of X only (PC model), and both
approximate X and predict Y (PLS model).
Arguments: model=<int>, comp=<predictive comp>
T2Range: Hotelling's T2 for the selected range of components.
It is a distance measure of how far away an
observation is from the center of a PCA or PLS/OPLS
model hyperplane.
Arguments: model=<int>, comp=(<comp1>, <comp2>), batch=<batch name>, aligned=True
T2Range: Hotelling's T2 for the selected range of components.
It is a distance measure of how far away an
observation is from the center of a PCA or PLS/OPLS
model hyperplane.
Arguments: model=<int>, comp=(<comp1>, <comp2>)
T2Range: Hotelling's T2 for the selected range of components.
It is a distance measure of how far away an
observation is from the center of a PCA or PLS/OPLS
model hyperplane.
Arguments: model=<int>, batch=<batch name>, aligned=True
T2Range: Hotelling's T2 for the selected range of components.
It is a distance measure of how far away an
observation is from the center of a PCA or PLS/OPLS
model hyperplane.
Arguments: model=<int>
T2RangeAlignedOOCSum: For a plot of T2Range vs. Maturity for
a batch in the workset the out of control (OOC) sum is the area
outside the control limits expressed as a percentage of the
total area. The summation is made relative to a T2Range vector
aligned to median length.
Arguments: model=<int>, comp=(<comp1>, <comp2>)
T2RangePS: Predicted Hotelling's T2 for the selected range of
components.
Arguments: model=<int>, comp=(<comp1>, <comp2>), batch=<batch name>, aligned=True
T2RangePS: Predicted Hotelling's T2 for the selected range of
components.
Arguments: model=<int>, comp=(<comp1>, <comp2>)
T2RangePS: Predicted Hotelling's T2 for the selected range of
components.
Arguments: model=<int>, batch=<batch name>, aligned=True
T2RangePS: Predicted Hotelling's T2 for the selected range of
components.
Arguments: model=<int>
T2RangePSAlignedOOCSum: For a plot of T2Range vs. Maturity
for a batch in the predictionset the out of control (OOC)
sum is the area outside the control limits expressed as a
percentage of the total area. The summation is made relative
to a T2RangePS vector aligned to median length.
Arguments: model=<int>, comp=(<comp1>, <comp2>)
TAlignedOOCSum: For a plot of scores (t) vs. Maturity for
a batch in the workset the out of control (OOC) sum is the
area outside the control limits expressed as a percentage of
the total area. The summation is made relative to a t vector
aligned to median length.
Arguments: model=<int>, comp=<predictive comp>
tcv: X score t for the selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>
tcv: X score t for the selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>
tcvSE: Jack knife standard error of the X score t computed
from all rounds of cross validation.
Arguments: model=<int>, comp=<predictive comp>
Time/Maturity: Time or Maturity variable determining the end point
of a Batch/Phase and used as Y in the observation
level models. This variable is used to align
Batch/Phase to the median length.
Arguments: model=<int>, aligned=True
to: Orthogonal X score to of the X-part of the OPLS
model, for the selected component. It summarises
the unique X variation, i.e., the X variation orthogonal
to Y.
Arguments: model=<int>, comp=<xorth comp>, batch=<batch name>, aligned=True
to: Orthogonal X score to of the X-part of the OPLS
model, for the selected component. It summarises
the unique X variation, i.e., the X variation orthogonal
to Y.
Arguments: model=<int>, comp=<xorth comp>
TOAlignedOOCSum: For a plot of orthogonal scores (to) vs. Maturity for
a batch in the workset the out of control (OOC) sum is the
area outside the control limits expressed as a percentage of
the total area. The summation is made relative to a to vector
aligned to median length.
Arguments: model=<int>, comp=<xorth comp>
tocv: Orthogonal X score to from the X-part of the OPLS
model, for a selected model dimension, computed
from the selected cross-validation round.
Arguments: model=<int>, comp=<xorth comp>
tocvSE: Jack knife standard error of the orthogonal X score to
from the X-part of the OPLS model, computed from
all rounds of cross validation.
Arguments: model=<int>, comp=<xorth comp>
toPS: Predicted orthogonal X score to of the X-part of the
OPLS model, for the observations in the predictionset.
Arguments: model=<int>, comp=<xorth comp>, batch=<batch name>, aligned=True
toPS: Predicted orthogonal X score to of the X-part of the
OPLS model, for the observations in the predictionset.
Arguments: model=<int>, comp=<xorth comp>
TOPSAlignedOOCSum: For a plot of orthogonal scores (to) vs. Maturity
for a batch in the predictionset the out of control (OOC)
sum is the area outside the control limits expressed as a
percentage of the total area. The summation is made relative
to a toPS vector aligned to median length.
Arguments: model=<int>, comp=<xorth comp>
tPS: Predicted X score t, for the selected model
dimension, for the observations in the predictionset.
Arguments: model=<int>, comp=<predictive comp>, batch=<batch name>, aligned=True
tPS: Predicted X score t, for the selected model
dimension, for the observations in the predictionset.
Arguments: model=<int>, comp=<predictive comp>
TPSAlignedOOCSum: For a plot of scores (t) vs. Maturity
for a batch in the predictionset the out of control (OOC)
sum is the area outside the control limits expressed as a
percentage of the total area. The summation is made relative
to a tPS vector aligned to median length.
Arguments: model=<int>, comp=<predictive comp>
tPScv: Predicted X score t for the selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>
tPScvSE: Jack knife standard error of the X score t, for the
observations in the predictionset, computed from all
rounds of cross validation.
Arguments: model=<int>, comp=<predictive comp>
u: Scores u, one vector for each model dimension, are
new variables summarizing Y so as to maximize the
correlation with the X scores t.
Arguments: model=<int>, comp=<predictive comp>
ucv: Y score u for the selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>
uo: Orthogonal Y score uo of the Y-part of the OPLS
model, for the selected component. It summarises
the unique Y variation, i.e., the Y variation orthogonal
to X.
Arguments: model=<int>, comp=<yorth comp>
uocv: Orthogonal Y score uo of the Y-part of the OPLS model,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<yorth comp>
VarDS: Variable from the selected dataset, in original units.
Available after selecting a DS in the Data box.
Arguments: dataset=<int>, variable=<variable name>
VarID:
Arguments: model=<int>, identifier=<secondary id name>
VarID:
Arguments: identifier=<secondary id name>
VarID: Variable Identifier.
Arguments: dataset=<int>, identifier=<secondary id name>
VarPS: Variable (X or Y), from the current predictionset, in
original units. Available after selecting PS in the Data box.
Arguments: variable=<variable name>
VIP: (Variable Importance for the Projection) summarizes the importance
of the variables both to explain X and to correlate to Y.
Terms with VIP>1 have an above average influence on the model.
Arguments: model=<int>, comp=<predictive comp>
VIP: (Variable Importance for the Projection) summarizes the importance
of the variables both to explain X and to correlate to Y.
Terms with VIP>1 have an above average influence on the model.
Arguments: model=<int>
VIPcv: VIP computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>
VIPcv: VIP computed from the selected cross validation round.
Arguments: model=<int>, selcv=<int>
VIPcvSE: Jack knife standard error of the VIP computed from all
rounds of cross validation.
Arguments: model=<int>, comp=<predictive comp>
VIPcvSE: Jack knife standard error of the VIP computed from all
rounds of cross validation.
Arguments: model=<int>
VIPLag: VIP of a lagged variable X as a function of lags.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>
VIPLag:
Arguments: model=<int>, variable=<variable name>
VIPorth: (Orthogonal Variable Importance for the Projection) summarizes the importance
of the variables explaining the part of X orthogonal to Y.
Terms with VIP>1 have an above average influence on the model.
Arguments: model=<int>
VIPorthLag:
Arguments: model=<int>, variable=<variable name>
VIPpred: (Predictive Variable Importance for the Projection) summarizes the importance
of the variables explaining the part of X related to Y.
Terms with VIP>1 have an above average influence on the model.
Arguments: model=<int>
VIPpredLag:
Arguments: model=<int>, variable=<variable name>
w: X loading weight that combines the X-variables (first
dimension) or the X residuals (subsequent
dimensions) to form the scores t. This loading weight
is selected so as to maximize the correlation between
t and u, thereby indirectly between t and Y.
X-variables with large w's (positive or negative) are
highly correlated with u (and Y).
Arguments: model=<int>, comp=<predictive comp>
w*: X loading weight that combines the original X
variables (not their residuals in contrast to w) to form
the scores t.
In the first dimension w* is equal to w.
w* is related to the correlation between the X
variables and the Y scores u.
W* = W(P'W)^-1
X-variables with large w* (positive or negative) are
highly correlated with u (and Y).
Arguments: model=<int>, comp=<predictive comp>
w*cv: X loading weight w*, for a selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>
w*cvSE: Jack knife standard error of the X loading weight w*
computed from all rounds of cross validation.
Arguments: model=<int>, comp=<predictive comp>
w*Lag: X loading weight w* of a lagged variable as a function of lags
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>
wcv: X loading weight w, for a selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>
wcvSE: Jack knife standard error of the X loading weight w
computed from all rounds of cross validation.
Arguments: model=<int>, comp=<predictive comp>
wLag: X loading weight w of a lagged variable as a function of lags
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>
wo: Orthogonal loading weight wo of the X-part of the
OPLS model. It combines the X residuals (subsequent
dimensions) to form the orthogonal X score to. This
loading weight is selected so as to minimize the
correlation between to and u, thereby indirectly
between to and Y.
Arguments: model=<int>, comp=<xorth comp>
wocv: Orthogonal loading weight wo of the X-part of the
OPLS model, for a selected model dimension,
computed from the selected cross validation round.
Arguments: model=<int>, comp=<xorth comp>, selcv=<int>
wocvSE: Jack knife standard error of the orthogonal loading
weight wo of the X-part of the OPLS model,
computed from all rounds of cross validation.
Arguments: model=<int>, comp=<xorth comp>
Xavg: Averages of X-variables, in original units. If the
variable is transformed, the average is in the
transformed metric.
Arguments: model=<int>
XObs: X-variables for the selected observation in the
workset in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XObsPred: Reconstructed observations as X=TP' from the
workset. Can be displayed in transformed or scaled
units.
Arguments: model=<int>, comp=<predictive comp>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XObsPred: Reconstructed observations as X=TP' from the
workset. Can be displayed in transformed or scaled
units.
Arguments: model=<int>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XObsPredPS: Reconstructed observations as X=TP' from the
predictionset. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, comp=<predictive comp>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XObsPredPS: Reconstructed observations as X=TP' from the
predictionset. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XObsRes: Residuals of observations (X space) in the workset, in
original units. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, comp=<predictive comp>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XObsRes: Residuals of observations (X space) in the workset, in
original units. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XObsResPS: Residuals of observations (X space) in the
predictionset, in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XObsResPS: Residuals of observations (X space) in the
predictionset, in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, observation=<observation name>, transformed=<bool>, scaled=<bool>
XVar: X-variable from the workset. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<variable name>, batch=<batch name>, transformed=<bool>, scaled=<bool>, aligned=True
XVar: X-variable from the workset. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarAlignedOOCSum: For a plot of an X-variable vs. Maturity
for a batch in the workset the out of control (OOC) sum is the
area outside the control limits expressed as a percentage of the
total area. The summation is made relative to an XVar vector
aligned to median length.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarPred: A reconstructed variable from the workset. For PLS
and PCA models, an X-variable from the workset is
reconstructed as X=TP'. For OPLS models XVarPred
represents the X-values predicted from the given Y-
values.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarPred: A reconstructed variable from the workset. For PLS
and PCA models, an X-variable from the workset is
reconstructed as X=TP'. For OPLS models XVarPred
represents the X-values predicted from the given Y-
values.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarPredPS: A reconstructed variable from the predictionset. For
PLS and PCA models, an X-variable from the
predictionset is reconstructed as X=TPS * P'. For OPLS
models XVarPredPS represents the X-values
predicted from the given Y-values.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarPredPS: A reconstructed variable from the predictionset. For
PLS and PCA models, an X-variable from the
predictionset is reconstructed as X=TPS * P'. For OPLS
models XVarPredPS represents the X-values
predicted from the given Y-values.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarPS: X-variable from the predictionset. Can be displayed
in transformed or scaled units.
Arguments: model=<int>, variable=<variable name>, batch=<batch name>, transformed=<bool>, scaled=<bool>, aligned=True
XVarPS: X-variable from the predictionset. Can be displayed
in transformed or scaled units.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarPS(coded): Variables from the predictionset.
Unlike XVarPS, qualitative variables are listed with their original
settings instead of being expanded into one column per setting.
Arguments: model=<int>
XVarPSAlignedOOCSum: For a plot of an X-variable vs. Maturity
for a batch in the predictionset the out of control (OOC) sum
is the area outside the control limits expressed as a percentage
of the total area. The summation is made relative to an XVarPS
vector aligned to median length.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarRes: X-variable residuals for observations in the workset,
in original units. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarRes: X-variable residuals for observations in the workset,
in original units. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, comp=(<predictive comp>, <xorth comp>, <yorth comp>), variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarResPS: X-variable residuals for observations in the
predictionset, in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarResPS: X-variable residuals for observations in the
predictionset, in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVaResPSST: X-variable residuals for observations in the
predictionset, in standardized units (divided by the
residual standard deviation). Can be displayed in
transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVaResPSST: X-variable residuals for observations in the
predictionset, in standardized units (divided by the
residual standard deviation). Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVaResST: X-variable residuals for observations in the workset,
in standardized units (divided by the residual
standard deviation). Can be displayed in transformed
or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVaResST: X-variable residuals for observations in the workset,
in standardized units (divided by the residual
standard deviation). Can be displayed in transformed
or scaled units.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
XVarResYRelated: X-variable residuals where the systematic variation
orthogonal to Y has been removed.
Arguments: model=<int>, variable=<variable name>, transformed=<bool>, scaled=<bool>
Xws: Scaling weights of the X-variables.
Arguments: model=<int>
Yavg: Averages of Y-variables, in original units. If the
variable is transformed, the average is in the
transformed metric.
Arguments: model=<int>
YObs: Y-variables for the selected observation in the
workset in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, observation=<observation name>, transformed=<bool>, scaled=<bool>
YObsRes: Residuals of observations (Y space) in the workset, in
original units. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, comp=<predictive comp>, observation=<observation name>, transformed=<bool>, scaled=<bool>
YObsResPS: Residuals of observations (Y space) in the
predictionset, in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, observation=<observation name>, transformed=<bool>, scaled=<bool>
YObsResPS: Residuals of observations (Y space) in the
predictionset, in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, observation=<observation name>, transformed=<bool>, scaled=<bool>
YPred: Predicted values of Y-variables for observations in the
workset, in original units, i.e., back-transformed
when transformations are present. Can be displayed
in transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, batch=<batch name>, transformed=<bool>, scaled=<bool>, aligned=True
YPred: Predicted values of Y-variables for observations in the
workset, in original units, i.e., back-transformed
when transformations are present. Can be displayed
in transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPred: Predicted values of Y-variables for observations in the
workset, in original units, i.e., back-transformed
when transformations are present. Can be displayed
in transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, batch=<batch name>, transformed=<bool>, scaled=<bool>, aligned=True
YPred: Predicted values of Y-variables for observations in the
workset, in original units, i.e., back-transformed
when transformations are present. Can be displayed
in transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredAlignedOOCSum: For a plot of YPred vs. Maturity for a batch
in the workset the out of control (OOC) sum is the area outside
the control limits expressed as a percentage of the total area.
The summation is made relative to a YPred vector aligned to median length.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredcv:
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
YPredcv:
Arguments: model=<int>, variable=<y variable name>
YPredErrcv: Prediction error of the fitted Ys for observations in
the workset, computed from the cross validation
procedure.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
YPredErrcv: Prediction error of the fitted Ys for observations in
the workset, computed from the cross validation
procedure.
Arguments: model=<int>, variable=<y variable name>
YPredErrcvSE: Jack knife standard error of the prediction error of the
fitted Ys for observations in the workset, computed
from the cross validation rounds.
Arguments: model=<int>, variable=<y variable name>
YPredPS: Predicted values for Y-variables for observations in
the predictionset, in original units, i.e. back
transformed when transformations are present. Can
be displayed in transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, batch=<batch name>, transformed=<bool>, scaled=<bool>, aligned=True
YPredPS: Predicted values for Y-variables for observations in
the predictionset, in original units, i.e. back
transformed when transformations are present. Can
be displayed in transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPS: Predicted values for Y-variables for observations in
the predictionset, in original units, i.e. back
transformed when transformations are present. Can
be displayed in transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, batch=<batch name>, transformed=<bool>, scaled=<bool>, aligned=True
YPredPS: Predicted values for Y-variables for observations in
the predictionset, in original units, i.e. back
transformed when transformations are present. Can
be displayed in transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPSAlignedOOCSum: For a plot of YPred vs. Maturity for a batch in the
predictionset the out of control (OOC) sum is the area outside the control
limits expressed as a percentage of the total area. The summation is made
relative to a YPredPS vector aligned to median length.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPSConfInt+: Upper limit for the confidence interval of predicted
Ys from the predictionset. The limit is calculated from
the cross validation and the confidence level
specified in model options.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPSConfInt+: Upper limit for the confidence interval of predicted
Ys from the predictionset. The limit is calculated from
the cross validation and the confidence level
specified in model options.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPSConfInt-: Lower limit for the confidence interval of predicted
Ys from the predictionset. The limit is calculated from
the cross validation and the confidence level
specified in model options.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPSConfInt-: Lower limit for the confidence interval of predicted
Ys from the predictionset. The limit is calculated from
the cross validation and the confidence level
specified in model options.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPScv: Predicted values of the modelled Ys for observations
in the predictionset, computed from the cross
validation procedure.
Arguments: model=<int>, comp=<predictive comp>, selcv=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPScv: Predicted values of the modelled Ys for observations
in the predictionset, computed from the cross
validation procedure.
Arguments: model=<int>, selcv=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YPredPScvSE: Jack knife standard error of the prediction of Y for
observations in the predictionset, computed from all
rounds of cross validation.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>
YPredPScvSE: Jack knife standard error of the prediction of Y for
observations in the predictionset, computed from all
rounds of cross validation.
Arguments: model=<int>, variable=<y variable name>
YRelatedProfile: Displays the estimated pure profiles of the
underlying constituents in X under the assumption of
additive Y-variables.
Estimation includes a linear transformation of the
Coefficient matrix, Bp(Bp'Bp)^-1, where Bp is the
Coefficient matrix computed using only the predictive
components (i.e., the components orthogonal to Y are
not included in the computation of Bp).
Arguments: model=<int>, variable=<y variable name>
YVar: Y-variable from the workset. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, batch=<batch name>, transformed=<bool>, scaled=<bool>, aligned=True
YVar: Y-variable from the workset. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YVarPS: Y-variable from the predictionset. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, batch=<batch name>, transformed=<bool>, scaled=<bool>, aligned=True
YVarPS: Y-variable from the predictionset. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YVarRes: Y-variable residuals for observations in the workset,
in original units. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YVarRes: Y-variable residuals for observations in the workset,
in original units. Can be displayed in transformed or
scaled units.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YVarResPS: Y-variable residuals for observations in the
predictionset, in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YVarResPS: Y-variable residuals for observations in the
predictionset, in original units. Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YVaResPSST: Y-variable residuals for observations in the
predictionset, in standardized units (divided by the
residual standard deviation). Can be displayed in
transformed or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YVaResPSST: Y-variable residuals for observations in the
predictionset, in standardized units (divided by the
residual standard deviation). Can be displayed in
transformed or scaled units.
Arguments: model=<int>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
YVaResST: Y-variable residuals for observations in the workset,
in standardized units (divided by the residual
standard deviation). Can be displayed in transformed
or scaled units.
Arguments: model=<int>, comp=<predictive comp>, variable=<y variable name>, transformed=<bool>, scaled=<bool>
Yws: Scaling weights of the Y-variables.
Arguments: model=<int>
Data and other attributes defined here:
- ContributionType = <class 'umetrics.simca.ContributionType'>
- Different contribution types
scores --- Contributions for scores
dmodx --- Contributions for distance to model X
dmody --- Contributions for distance to model Y
- WeightType = <class 'umetrics.simca.WeightType'>
- Different contribution weight types
normalized --- Normalized contributions
p --- Weighted with P
po --- Weighted with Po.
wstar --- Weighted with W*
rx --- Weighted with RX
ry --- Weighted with RY
coeffcs --- Weighted with CoeffCS
coeffcsraw --- Weighted with CoeffCSRaw
raw --- Weighted with Raw
vip --- Weighted with VIP
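Example (a small sketch; it assumes the two enums are also reachable at module level, as their qualified names above indicate):
    >>> from umetrics import simca
    >>> simca.ContributionType.dmodx    # contributions for distance to model X
    >>> simca.WeightType.normalized     # normalized contribution weights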
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class ProjectHandler(Boost.Python.instance) |
|
Handles creating new projects, opening projects, closing projects.
If Python runs embedded, open projects might not be used by the application
until it is activated. See documentation for the application object. |
|
- Method resolution order:
- ProjectHandler
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
Static methods defined here:
- close_all_projects(...)
- close_all_projects() -> bool :
Closes all currently open projects.
Returns False if all projects could not be closed, otherwise True.
If the method succeeds, all Project instances will be closed and should not be used.
- close_project(...)
- close_project( (Project)arg1 [, (bool)confirm=True]) -> bool :
Closes the project.
Returns False if the project could not be closed, otherwise True.
If the method succeeds, the supplied project will be closed and should not be used.
If confirm is True, the application (if running embedded) will get confirmation from
the user before closing the project if it has unsaved data.
If it is False any unsaved data will be lost.
- create_project(...)
- create_project( (str)projectfilepath) -> Project :
Create a new empty project.
The new project will use 'projectfilepath' as the new project file.
The project file must not exist.
- open_project(...)
- open_project( (str)projectpath) -> Project :
Opens a project or returns an existing project if it is already open.
- save_project_as(...)
- save_project_as( (Project)project, (str)newprojectpath) -> bool :
Saves the project to a new file.
If the new file already exists, it will be overwritten.
Returns True if successful and False if it fails (in which case the project is still open using the old file).
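Example (a minimal usage sketch, assuming the module is importable as umetrics.simca and that the file paths below are placeholders):
    >>> from umetrics import simca
    >>> # Open an existing project, or get the instance that is already open.
    >>> project = simca.ProjectHandler.open_project(r'C:\projects\example.usp')
    >>> # Save a copy under a new name (overwrites an existing file with that name).
    >>> simca.ProjectHandler.save_project_as(project, r'C:\projects\example_copy.usp')
    >>> # Close without asking for confirmation; any unsaved data is lost.
    >>> simca.ProjectHandler.close_project(project, False)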
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class ProjectOptions(Boost.Python.instance) |
|
Options specific to a project. |
|
- Method resolution order:
- ProjectOptions
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception
This class cannot be instantiated from Python
- __reduce__ = (...)
- __str__(...)
- __str__( (ProjectOptions)self) -> str
- apply(...)
- apply( (ProjectOptions)arg1) -> None :
Applies the current settings.
Data descriptors defined here:
- bring_secondary_ids_to_batch_level
- If True, secondary variable IDs will be kept in batch level datasets created from raw data and statistics.
- cut_long_extrapolate_short_batches
- If True, long batches will be cut to median length and short batches extrapolated to median length
when batch level datasets are created. If False, all data for long batches will be kept and short batches will be padded with missing values.
- keep_whole_batches_in_the_same_cv_group
- If True, all observations belonging to the same batch will be grouped in the same
cross validation group when batch evolution models are fitted.
If False, default CV groups are used (every 7th observation in the same group).
- use_average_batch_for_missing_phases
If True, the average batch will be used for missing phases when batch level datasets are created.
If False, missing values will be used.
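Example (an illustrative sketch; it assumes a ProjectOptions instance, here called options, has already been obtained from an open project; the accessor used to obtain it is not shown in this class):
    >>> options.keep_whole_batches_in_the_same_cv_group = True
    >>> options.cut_long_extrapolate_short_batches = False   # keep all data for long batches
    >>> print(options)       # __str__ gives a string representation of the options
    >>> options.apply()      # apply the current settings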
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class UmMat(Boost.Python.instance) |
|
A row major matrix of floating point numbers.
Can in most cases be treated as a list of UmVec.
The matrix can be made 'jagged' (not all rows need to be the same length),
but matrices returned by SIMCA will always have rows of the same length. |
|
- Method resolution order:
- UmMat
- Boost.Python.instance
- builtins.object
Methods defined here:
- __contains__(...)
- __copy__(...)
- __deepcopy__(...)
- __delitem__(...)
- __eq__(...)
- __getitem__(...)
- __init__(...)
- __iter__(...)
- __len__(...)
- __reduce__ = (...)
- __repr__(...)
- __setitem__(...)
- __str__(...)
- append(...)
- extend(...)
Data and other attributes defined here:
- __instance_size__ = 48
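Example (a short sketch of the list-like access, assuming m is a UmMat returned by a SIMCA call and that its rows, as UmVec objects, support the same list-like protocol):
    >>> nrows = len(m)                        # number of rows
    >>> first_row = m[0]                      # a UmVec
    >>> value = m[0][1]                       # element in row 0, column 1 (assumes UmVec indexing)
    >>> as_lists = [list(row) for row in m]   # copy into plain Python lists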
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class ValueIDs(Boost.Python.instance) |
|
Identifies values in a ProjectData.
The IDs can be variable names, observation names or arbitrary strings.
If the names are arbitrary strings, the strings must be unique |
|
- Method resolution order:
- ValueIDs
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- __init__( (object)self, (object)ids, (int)id_type, (Project)project) -> None :
ids --- A sequence of strings that are used to identify values in a ProjectData
id_type --- Specifies what 'ids' refer to (see id_type enum).
project --- The project that the ValueIDs instance is to be used with.
- __len__(...)
- __len__( (ValueIDs)arg1) -> int
- __reduce__ = (...)
- get_names(...)
- get_names( (ValueIDs)self [, (int)alias=1]) -> list :
Returns the value names.
alias --- the index of the desired alias (default 1 for primary name)
- size_aliases(...)
- size_aliases( (ValueIDs)self) -> int :
Returns the number of aliases
Data and other attributes defined here:
- id_type = <class 'umetrics.simca.id_type'>
- Used in ValueIDs constructor to specify what the id's refer to.
observation --- Strings are observation names.
variable --- Strings are variable names.
other --- Strings are arbitrary strings.
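Example (an illustrative sketch, assuming an open Project instance named project and that 'Temp' and 'Pressure' are hypothetical variable names that exist in it):
    >>> from umetrics import simca
    >>> ids = simca.ValueIDs(['Temp', 'Pressure'], simca.ValueIDs.id_type.variable, project)
    >>> len(ids)             # number of values identified
    >>> ids.get_names()      # primary names (alias 1)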
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
|
class Workset(Boost.Python.instance) |
|
Defines the data used to create a model.
The class defines which data to include, as well as scaling, transformations, etc.
To create a new unfitted model from the data, call create_model. |
|
- Method resolution order:
- Workset
- Boost.Python.instance
- builtins.object
Methods defined here:
- __init__(...)
- Raises an exception.
This class cannot be instantiated from Python.
- __reduce__ = (...)
- add_dataset(...)
- add_dataset( (Workset)self, (object)dataset) -> None :
Add a new dataset to the workset.
dataset --- Contains the dataset number or name of the dataset to add to the workset.
- create_model(...)
- create_model( (Workset)self [, (bool)check_missing_values=True [, (bool)remove_bad_data=True]]) -> list :
Creates a new model from the current specification.
check_missing_values --- If true, observations/variables that have more missing values
than the limit will be excluded.
See set_missing_obs_percent and set_missing_var_percent.
remove_bad_data --- If true, rows with no variance and/or many missing values are excluded;
if false, no rows are excluded (the check_missing_values parameter is ignored).
Returns a list with the numbers of the new models.
- create_phase_cropper(...)
- create_phase_cropper( (Workset)self) -> PhaseCropper :
Create an object to specify the cropping of batches.
- crop_phases(...)
- crop_phases( (Workset)self, (object)phases, (PhaseCropper)cropper) -> None :
Crop phases in a batch model.
phases --- Contains the phase names.
The parameter is the name of the phase as a string or a list of names
or empty list to get all.
cropper --- A PhaseCropper object to define cropping of the phases.
- exclude_batches(...)
- exclude_batches( (Workset)self, (object)batch) -> None :
Exclude batches from a batch model.
batches --- Contains the batch name or index.
The parameter is the name of the batch as a string or a list of names or the indices
or empty list to get all.
- exclude_obs(...)
- exclude_obs( (Workset)self, (object)observations) -> None :
Excludes observations from the workset.
observations --- Contains the IDs or indices of the observations that should be
excluded. The parameter can either be a list of indices
or the observation names.
- exclude_phases(...)
- exclude_phases( (Workset)self, (object)phases) -> None :
Exclude phases from a batch model.
phases --- Contains the phase names.
The parameter is the name of the phase as a string or a list of names
or empty list to get all.
- exclude_variables(...)
- exclude_variables( (Workset)self, (object)variables [, (object)expanded2=-1 [, (object)expanded3=-1 [, (object)Lag=0 [, (object)lag_time_variable=[] [, (object)lag_speed_variable=[]]]]]]) -> None :
Excludes variables from the workset.
variables --- Contains the IDs of the variables that should be excluded
from the workset. The parameter can either be a list of indices
or the variable names, or an empty list to exclude all.
expanded2 --- The second variable in an expanded term (only one at a time);
-1 or an empty string if it is not used.
expanded3 --- The third variable in an expanded term (only one at a time);
-1 or an empty string if it is not used.
Lag --- The number of steps or the distance the variable is lagged, or a list of steps or distances if expanded terms are supplied.
Default 0. If a lag time or speed variable is supplied, Lag is the distance; otherwise it is the number of steps.
lag_time_variable --- The name of the time variable used to calculate dynamic lags.
The parameter can either be a list of indices or variable names; empty for lag by steps, otherwise the same length as Lag.
lag_speed_variable --- The name of the speed variable used to calculate dynamic lags.
The parameter can either be a list of indices or variable names; empty for lag by steps, otherwise the same length as Lag.
- get_class_phase_names(...)
- get_class_phase_names( (Workset)self) -> list :
Returns a list with the names of the classes or phases used in this workset.
- get_data_collection(...)
- get_data_collection( (Workset)self) -> DataCollection :
Returns the data collection used as source for this model.
- get_included_obs(...)
- get_included_obs( (Workset)self) -> list :
Returns a list with the indices of the included observations.
- get_missing_obs_percent(...)
- get_missing_obs_percent( (Workset)self) -> float :
Gets the percentage of missing values that is allowed for observations.
Returns a value between 0 and 100.
- get_missing_var_percent(...)
- get_missing_var_percent( (Workset)self) -> float :
Gets the percentage of missing values that is allowed for variables.
Returns a value between 0 and 100.
- get_obs_class_phase_numbers(...)
- get_obs_class_phase_numbers( (Workset)self) -> list :
Returns a list with the class or phase number for each observation.
Excluded observations are also returned by this function.
- get_transform_constants(...)
- get_transform_constants( (Workset)self, (object)variable) -> list :
returns a list with the transform constants a, b and c for a variable.
See transformtype enum for more information
variable --- Contains the index or name of a variable.
- get_type(...)
- get_type( (Workset)self) -> modeltype :
Get the type of the new model.
Returns the type of the model to be created. See "modeltype" for different types.
- get_variable_class(...)
- get_variable_class( (Workset)self, (object)variable) -> list :
Gets the class a variable belongs to.
variable --- Contains the ID of the variable to get class names for.
The parameter can either be an index
or the variable name, or an empty list to get all.
Returns a list of class names for the variable
- get_variable_phase(...)
- get_variable_phase( (Workset)self, (object)variables) -> list :
Gets the phase a variable belongs to.
variables --- Contains the ID of the variable to get phase names for.
The parameter can either be an index
or the variable name, or an empty list to get all.
Returns a list of names of phases that a variable belongs to.
- get_variable_scale_block(...)
- get_variable_scale_block( (Workset)self, (object)variable) -> int :
returns the scaling block number for a variable.
variable --- Contains the index or name of a variable.
- get_variable_scale_block_weight(...)
- get_variable_scale_block_weight( (Workset)self, (object)variable) -> blockscaletype :
returns the block scaling weight for a variable.
See blockscaletype enum for return value types
variable --- Contains the index or name of a variable.
- get_variable_scale_modifier(...)
- get_variable_scale_modifier( (Workset)self, (object)variable) -> float :
returns the scale modifier for a variable.
variable --- Contains the index or name of a variable.
- get_variable_scale_type(...)
- get_variable_scale_type( (Workset)self, (object)variable) -> scaletype :
returns the base scale type for a variable.
See scaletype enum for return value types
variable --- Contains the index or name of a variable.
- get_variable_transform(...)
- get_variable_transform( (Workset)self, (object)variable) -> transformtype :
returns the transform for a variable.
See transformtype enum for return value types
variable --- Contains the index or name of a variable.
- get_y_treatment(...)
- get_y_treatment( (Workset)self, (object)variables) -> list :
For a batch evolution model, gets information about how the Y variables are treated.
variables --- Contains the IDs of the variables that should be tested.
The parameter can either be a list of indices
or variable names.
Returns a list of Y treatments for the variables
See ytreatment enum for different options.
- include_batches(...)
- include_batches( (Workset)self, (object)batches) -> None :
Include batches in a batch model.
batches --- Contains the batch name or index.
The parameter is the name of the batch as a string or a list of names or the indices
or empty list to get all.
- include_obs(...)
- include_obs( (Workset)self, (object)observations) -> None :
Includes observations in the workset.
observations --- Contains the IDs or indices of the observations that should be
included. The parameter can either be a list of indices
or the observation names.
- include_phases(...)
- include_phases( (Workset)self, (object)phases) -> None :
Include phases that have been excluded or cropped in a batch model.
phases --- Contains the phase names.
The parameter is the name of the phase as a string or a list of names
or empty list to get all.
- is_batch_excluded(...)
- is_batch_excluded( (Workset)self, (object)batches) -> list :
Returns if a batch is excluded or not.
batches --- Contains the batch name or index.
The parameter is the name of the batch as a string or a list of names or indices,
or an empty list to get all.
Returns a list with True for excluded batches, otherwise False
- is_phase_cropped(...)
- is_phase_cropped( (Workset)self, (object)phases) -> list :
Returns if a phase is cropped.
phases --- Contains the phase names.
The parameter is the name of the phase as a string or a list of names
or empty list to get all.
Returns a list with True for cropped phases, otherwise False
- is_phase_excluded(...)
- is_phase_excluded( (Workset)self, (object)phases) -> list :
Returns if a phase is excluded or not.
phases --- Contains the phase names.
The parameter is the name of the phase as a string or a list of names
or empty list to get all.
Returns a list with True for excluded phases, otherwise False
- is_variable_excluded(...)
- is_variable_excluded( (Workset)self, (object)variables) -> list :
Test if a variable is excluded
variables --- Contains the IDs of the variables to test if excluded
from the workset. The parameter can either be a list of indices
or the variable names or empty list to get all.
Returns a list with True if excluded otherwise False
- is_x(...)
- is_x( (Workset)self, (object)variables [, (object)expanded2=-1 [, (object)expanded3=-1 [, (object)Lag=0 [, (object)lag_time_variable=[] [, (object)lag_speed_variable=[]]]]]]) -> list :
Returns whether a variable is X or Y.
variables --- Contains the IDs of the variables that should be checked if they are
X variables. The parameter can either be a list of indices
or the variable names, or an empty list to get all.
expanded2 --- The second variable in an expanded term (only one at a time);
-1 or an empty string if it is not used.
expanded3 --- The third variable in an expanded term (only one at a time);
-1 or an empty string if it is not used.
Lag --- The number of steps or the distance the variable is lagged, or a list of steps or distances if expanded terms are supplied.
Default 0. If a lag time or speed variable is supplied, Lag is the distance; otherwise it is the number of steps.
lag_time_variable --- The name of the time variable used to calculate dynamic lags.
The parameter can either be a list of indices or variable names; empty for lag by steps, otherwise the same length as Lag.
lag_speed_variable --- The name of the speed variable used to calculate dynamic lags.
The parameter can either be a list of indices or variable names; empty for lag by steps, otherwise the same length as Lag.
Returns a list with True if the variable is X, or False if it is Y or excluded.
- recreate_batch_level(...)
- recreate_batch_level( (Workset)self [, (bool)fullrecreate=True [, (bool)check_missing_values=True [, (bool)remove_bad_data=True]]]) -> tuple :
Tries to recreate dependent batch level datasets and models.
This method can only be used if the workset is a batch-evolution workset and
was created by calling umetrics.simca.Project.new_as_workset() or umetrics.simca.Project.edit_workset().
The batch evolution model(s) and any recreated models will be auto-fitted.
fullrecreate --- If true, all batch level datasets and models will be recreated and fitted.
The same aligned maturity vector is reused; this is not recommended if batches that differ a lot in length are added.
If false, the batch level datasets are recreated and an unfitted default batch level model is created.
check_missing_values --- If true, observations/variables that have more missing values
than the limit will be excluded.
See set_missing_obs_percent and set_missing_var_percent.
remove_bad_data --- If true, rows with no variance and/or many missing values are excluded;
if false, no rows are excluded (the check_missing_values parameter is ignored).
The method returns a tuple containing:
- A string with information about problems encountered when the datasets/models were recreated.
If no problems were encountered the string is empty.
- A list containing the numbers of the recreated models.
- A list containing the numbers of the recreated datasets.
- set_frozen(...)
- set_frozen( (Workset)self, (float)weight, (float)offset, (object)variables [, (object)phase=None]) -> None :
Sets the scaling weight and offset for the specified variable(s)
The weight or offset can be set to NaN (float('NaN')) to indicate that they should be calculated
from the data. If weight is set to NaN, the unit variance weight will be used. If offset is set
to NaN, the mean will be used.
The method will set the scale type to frozen.
offset --- The offset that will be subtracted from the variable(s) when the workset is scaled.
weight --- The weight that the variable(s) will be multiplied with when the
workset is scaled.
variables --- The IDs of the variable(s).
It can either be an index, a variable name, a list of indices or names
or an empty list to set all.
phase --- The name of a phase. If this parameter is set, only observations belonging to
that phase will be scaled with the weight and offset.
- set_min_non_median_values(...)
- set_min_non_median_values( (Workset)self, (int)values) -> None :
Sets the minimum number of non-median values option.
The value must be >= 0.
- set_missing_obs_percent(...)
- set_missing_obs_percent( (Workset)self, (float)percent) -> None :
Sets the percentage of missing values that is allowed for observations.
The value must be between 0 and 100.
- set_missing_var_percent(...)
- set_missing_var_percent( (Workset)self, (float)percent) -> None :
Sets the percentage of missing values that is allowed for variables.
The value must be between 0 and 100.
- set_obs_as_model(...)
- set_obs_as_model( (Workset)self, (int)model [, (bool)asmother=False]) -> None :
Copy the observation settings from another model
model --- the number of the model to copy from
asmother --- If true, the mother workset will be copied, i.e. all classes/phases;
if false, only the current class model will be copied.
Ignored for non class/phase models.
- set_obs_class(...)
- set_obs_class( (Workset)self, (object)observations, (int)classnum [, (str)classname='']) -> None :
observations --- Contains the IDs or index of the observations that should be
set. The parameter can either be a list of indices
or the observation names.
classnum --- The number of the class; should be > 0.
classname --- The name of the class. If the workset already contains a class with the same
number, it will be renamed. If classname is omitted, classnum will be used as the name.
- set_type(...)
- set_type( (Workset)self, (modeltype)modeltype) -> None :
Set the type for the new model.
modeltype --- The type of the model to set, e.g. PLS,
PCA, OPLS-Class, etc. See the umetrics.simca.modeltype enum
for available model types.
- set_var_as_model(...)
- set_var_as_model( (Workset)self, (int)model) -> None :
Copy the variable settings from another model
model --- The model number to copy settings from.
- set_variable_class(...)
- set_variable_class( (Workset)self, (object)variables, (object)classes) -> None :
Set the class a variable should belong to
variables --- Contains the IDs of the variables that should be assigned to a class.
The parameter can either be a list of indices
or the variable names, or an empty list to set all.
classes --- Contains the class numbers.
The parameter can either be a positive integer or a list of integers.
- set_variable_phase(...)
- set_variable_phase( (Workset)self, (object)variables, (object)phases) -> None :
Set the phase a variable should belong to
variables --- Contains the IDs of the variables that should be assigned to a phase.
The parameter can either be a list of indices
or the variable names, or an empty list to set all.
phases --- Contains the phase names.
The parameter is the name of the phase as a string or a list of names
or empty list to get all.
- set_variable_scale_block_and_weight(...)
- set_variable_scale_block_and_weight( (Workset)self, (object)variables, (int)block, (blockscaletype)blockscaletype) -> None :
Block-wise variable scaling.
variables --- The IDs of the variable(s).
It can either be an index, a variable name, a list of indices or names
or an empty list to set all.
block --- The block number.
If the block number is zero, the blockscaletype is ignored.
- set_variable_scale_modifier(...)
- set_variable_scale_modifier( (Workset)self, (object)variables, (float)modifier) -> None :
Scaling variables up or down relative to their base weight.
variables --- Contains the IDs of the variables that should be changed.
The parameter can either be a list of indices
or the variable names or empty list to get all.
modifier --- The value to modify the variable scaling relative to its base weight.
- set_variable_scale_type(...)
- set_variable_scale_type( (Workset)self, (object)variables, (scaletype)scaletype) -> None :
Sets the base scale type for variables.
variables --- Contains the IDs of the variables that should be changed.
The parameter can either be a list of indices
or the variable names or empty list to get all.
scaletype --- Describes how the variable should be scaled.
See scaletype enum for different options.
- set_variable_transform(...)
- set_variable_transform( (Workset)self, (object)variables, (transformtype)transformtype, (float)a, (float)b, (float)c) -> None :
Sets the transform for the selected variables.
variables --- The IDs of the variable(s).
It can either be an index, a variable name, a list of indices or names
or an empty list to set all.
a, b, c --- The transform constants. See the transformtype enum for more information.
- set_x(...)
- set_x( (Workset)self, (object)variables [, (object)expanded2=-1 [, (object)expanded3=-1 [, (object)Lag=0 [, (object)lag_time_variable=[] [, (object)lag_speed_variable=[]]]]]]) -> None :
Sets variables to X.
variables --- Contains the IDs of the variables that should be set as
X variables. The parameter can either be a list of indices
or the variable names, or an empty list to set all.
expanded2 --- The second variable in an expanded term (only one at a time);
-1 or an empty string if it is not used.
expanded3 --- The third variable in an expanded term (only one at a time);
-1 or an empty string if it is not used.
Lag --- The number of steps or the distance the variable is lagged, or a list of steps or distances if expanded terms are supplied.
Default 0. If a lag time or speed variable is supplied, Lag is the distance; otherwise it is the number of steps.
lag_time_variable --- The name of the time variable used to calculate dynamic lags.
The parameter can either be a list of indices or variable names; empty for lag by steps, otherwise the same length as Lag.
lag_speed_variable --- The name of the speed variable used to calculate dynamic lags.
The parameter can either be a list of indices or variable names; empty for lag by steps, otherwise the same length as Lag.
- set_y(...)
- set_y( (Workset)self, (object)variables [, (object)Lag=0 [, (object)lag_time_variable=[] [, (object)lag_speed_variable=[]]]]) -> None :
Sets variables to Y.
variables --- Contains the IDs of the variables that should be set as
Y variables. The parameter can either be a list of indices
or the variable names, or an empty list to set all.
Lag --- The number of steps or the distance the variable is lagged (only one variable).
Default 0. If a lag time or speed variable is supplied, Lag is the distance; otherwise it is the number of steps.
lag_time_variable --- The name of the time variable used to calculate dynamic lags.
The parameter can either be a list of indices or variable names; empty for lag by steps, otherwise the same length as Lag.
lag_speed_variable --- The name of the speed variable used to calculate dynamic lags.
The parameter can either be a list of indices or variable names; empty for lag by steps, otherwise the same length as Lag.
- set_y_treatment(...)
- set_y_treatment( (Workset)self, (object)variables, (ytreatment)treatment) -> None :
For a batch evolution model, changes how the Y variables are treated.
variables --- Contains the IDs of the Y variables that should be changed.
The parameter can either be a list of indices
or the variable names, or an empty list to set all.
treatment --- Describes how the Y variables should be treated.
See the ytreatment enum for different options.
Methods inherited from Boost.Python.instance:
- __new__(*args, **kwargs) from Boost.Python.class
- Create and return a new object. See help(type) for accurate signature.
Data descriptors inherited from Boost.Python.instance:
- __dict__
- __weakref__
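Example --- a minimal usage sketch, not taken from the SIMCA documentation. It assumes that
`project` is an open umetrics.simca.Project and that Project.edit_workset() (referenced under
recreate_batch_level above) returns a Workset for an existing model; the argument passed to it
here is an assumption. Only Workset methods documented above are called.
    def refit_with_fewer_variables(project, model_number, variables_to_drop):
        # Assumed signature: edit the workset of an existing model.
        ws = project.edit_workset(model_number)
        # Exclude the given variables (a list of names or indices).
        ws.exclude_variables(variables_to_drop)
        # Allow up to 50 % missing values per variable before it is dropped
        # by create_model's missing-value check.
        ws.set_missing_var_percent(50)
        # Returns a list with the numbers of the new (unfitted) models.
        return ws.create_model()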
|
class blockscaletype(Boost.Python.enum) |
|
Block scaling types
none --- The variable is not block scaled.
squareroot --- Scale by the inverse of the square root of the number of variables in the block.
fourthroot --- Scale by the inverse of the fourth root of the number of variables in the block. |
|
- Method resolution order:
- blockscaletype
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- fourthroot = umetrics.simca.blockscaletype.fourthroot
- names = {'fourthroot': umetrics.simca.blockscaletype.fourthroot, 'none': umetrics.simca.blockscaletype.none, 'squareroot': umetrics.simca.blockscaletype.squareroot}
- none = umetrics.simca.blockscaletype.none
- squareroot = umetrics.simca.blockscaletype.squareroot
- values = {0: umetrics.simca.blockscaletype.none, 1: umetrics.simca.blockscaletype.squareroot, 2: umetrics.simca.blockscaletype.fourthroot}
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
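Example --- a minimal sketch, not taken from the SIMCA documentation. It assumes that `ws` is a
Workset (see the class above) and uses only Workset.set_variable_scale_block_and_weight together
with the enum attributes listed here.
    from umetrics import simca

    def block_scale_all(ws):
        # Put all variables (empty list = all) in scaling block 1 and weight
        # the block by the inverse square root of its number of variables.
        ws.set_variable_scale_block_and_weight([], 1, simca.blockscaletype.squareroot)

    # Boost.Python enums expose the documented 'names' and 'values' mappings:
    assert simca.blockscaletype.values[2] == simca.blockscaletype.fourthroot
    assert simca.blockscaletype.names['none'] == simca.blockscaletype.none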
|
class compressionmethod(Boost.Python.enum) |
|
Wavelet filter - desired decomposition and Compression method.
dwt --- Discrete wavelet transform. Recommended for low frequency signals.
bestbasis --- Recommended for high frequency signals. |
|
- Method resolution order:
- compressionmethod
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- bestbasis = umetrics.simca.compressionmethod.bestbasis
- dwt = umetrics.simca.compressionmethod.dwt
- names = {'bestbasis': umetrics.simca.compressionmethod.bestbasis, 'dwt': umetrics.simca.compressionmethod.dwt}
- values = {0: umetrics.simca.compressionmethod.dwt, 1: umetrics.simca.compressionmethod.bestbasis}
|
class derivativeorder(Boost.Python.enum) |
|
Different derivative orders. Only valid for derivative filters.
firstderivative --- First derivative order.
secondderivative --- Second derivative order.
thirdderivative --- Third derivative order. |
|
- Method resolution order:
- derivativeorder
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- firstderivative = umetrics.simca.derivativeorder.firstderivative
- names = {'firstderivative': umetrics.simca.derivativeorder.firstderivative, 'secondderivative': umetrics.simca.derivativeorder.secondderivative, 'thirdderivative': umetrics.simca.derivativeorder.thirdderivative}
- secondderivative = umetrics.simca.derivativeorder.secondderivative
- thirdderivative = umetrics.simca.derivativeorder.thirdderivative
- values = {1: umetrics.simca.derivativeorder.firstderivative, 2: umetrics.simca.derivativeorder.secondderivative, 3: umetrics.simca.derivativeorder.thirdderivative}
|
class detrendmode(Boost.Python.enum) |
|
Wavelet filter detrend mode.
none --- No detrend.
mean --- Removes the mean.
linear --- Removes the best linear fit. |
|
- Method resolution order:
- detrendmode
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- linear = umetrics.simca.detrendmode.linear
- mean = umetrics.simca.detrendmode.mean
- names = {'linear': umetrics.simca.detrendmode.linear, 'mean': umetrics.simca.detrendmode.mean, 'none': umetrics.simca.detrendmode.none}
- none = umetrics.simca.detrendmode.none
- values = {0: umetrics.simca.detrendmode.none, 1: umetrics.simca.detrendmode.mean, 2: umetrics.simca.detrendmode.linear}
|
class ewma_type(Boost.Python.enum) |
|
Different EWMA types.
filter --- Filter EWMA.
predictive --- Predictive EWMA. |
|
- Method resolution order:
- ewma_type
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- filter = umetrics.simca.ewma_type.filter
- names = {'filter': umetrics.simca.ewma_type.filter, 'predictive': umetrics.simca.ewma_type.predictive}
- predictive = umetrics.simca.ewma_type.predictive
- values = {0: umetrics.simca.ewma_type.filter, 1: umetrics.simca.ewma_type.predictive}
|
class filter_scaletype(Boost.Python.enum) |
|
Filter scaling types.
none --- No scaling.
center --- Centered (the mean is subtracted).
pareto --- Scaled by the inverse of the square root of the standard deviation.
unitvariance --- Scaled by the inverse of the standard deviation. |
|
- Method resolution order:
- filter_scaletype
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- center = umetrics.simca.filter_scaletype.center
- names = {'center': umetrics.simca.filter_scaletype.center, 'none': umetrics.simca.filter_scaletype.none, 'pareto': umetrics.simca.filter_scaletype.pareto, 'unitvariance': umetrics.simca.filter_scaletype.unitvariance}
- none = umetrics.simca.filter_scaletype.none
- pareto = umetrics.simca.filter_scaletype.pareto
- unitvariance = umetrics.simca.filter_scaletype.unitvariance
- values = {1: umetrics.simca.filter_scaletype.none, 2: umetrics.simca.filter_scaletype.center, 3: umetrics.simca.filter_scaletype.pareto, 4: umetrics.simca.filter_scaletype.unitvariance}
|
class filtertype(Boost.Python.enum) |
|
Different spectral filter types
derivatives --- Applying derivatives transforms the dataset from the original domain to the first, second, or third derivative.
msc --- Multiplicative signal correction.
snv --- Standard normal variate.
rowcenter --- Applying the rowcenter filter subtracts the row mean from each row value.
savitzky --- Savitzky-Golay. A filter that removes noise by applying a moving polynomial to the data.
EWMA --- Exponentially weighted moving average.
wcs --- Wavelet compression spectral.
wds --- Wavelet denoising spectral.
osc --- Orthogonal signal correction.
wcts --- Wavelet compress time series.
wdts --- Wavelet denoising/decimation time series.
plugin --- Plugin. |
|
- Method resolution order:
- filtertype
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- EWMA = umetrics.simca.filtertype.EWMA
- derivatives = umetrics.simca.filtertype.derivatives
- msc = umetrics.simca.filtertype.msc
- names = {'EWMA': umetrics.simca.filtertype.EWMA, 'derivatives': umetrics.simca.filtertype.derivatives, 'msc': umetrics.simca.filtertype.msc, 'osc': umetrics.simca.filtertype.osc, 'plugin': umetrics.simca.filtertype.plugin, 'rowcenter': umetrics.simca.filtertype.rowcenter, 'savitzky': umetrics.simca.filtertype.savitzky, 'snv': umetrics.simca.filtertype.snv, 'wcs': umetrics.simca.filtertype.wcs, 'wcts': umetrics.simca.filtertype.wcts, ...}
- osc = umetrics.simca.filtertype.osc
- plugin = umetrics.simca.filtertype.plugin
- rowcenter = umetrics.simca.filtertype.rowcenter
- savitzky = umetrics.simca.filtertype.savitzky
- snv = umetrics.simca.filtertype.snv
- values = {0: umetrics.simca.filtertype.snv, 1: umetrics.simca.filtertype.msc, 2: umetrics.simca.filtertype.derivatives, 3: umetrics.simca.filtertype.savitzky, 4: umetrics.simca.filtertype.rowcenter, 5: umetrics.simca.filtertype.EWMA, 6: umetrics.simca.filtertype.osc, 7: umetrics.simca.filtertype.wcs, 8: umetrics.simca.filtertype.wds, 9: umetrics.simca.filtertype.wcts, ...}
- wcs = umetrics.simca.filtertype.wcs
- wcts = umetrics.simca.filtertype.wcts
- wds = umetrics.simca.filtertype.wds
- wdts = umetrics.simca.filtertype.wdts
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class hierarchical_types(Boost.Python.enum) |
|
Different hierarchical model types
none --- No hierarchical type.
scores --- Scores.
residuals --- Residuals.
y_orthogonal_res --- X residuals orthogonal to Y for O2PLS models.
y_related_res --- X residuals related to Y for O2PLS models.
y_orthogonal_scores --- Scores orthogonal to Y.
y_pred --- Predicted Y variables. |
|
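The integer values of hierarchical_types (1, 2, 4, 8, 16, 32) are powers of two, which suggests the members are meant to be combined as bit flags. A hedged sketch of that reading, assuming umetrics.simca is importable; whether the API accepts a combined mask wherever a hierarchical_types argument is expected is an assumption, since combining members only produces a plain int:

    import umetrics.simca as simca

    # Combine two flags; the result is a plain int, not an enum member.
    mask = int(simca.hierarchical_types.scores) | int(simca.hierarchical_types.residuals)
    print(mask)                                   # 1 | 2 == 3

    # Recover the individual members contained in the mask.
    for value, member in simca.hierarchical_types.values.items():
        if value and mask & value == value:
            print(member.name)                    # scores, residuals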
- Method resolution order:
- hierarchical_types
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- names = {'none': umetrics.simca.hierarchical_types.none, 'residuals': umetrics.simca.hierarchical_types.residuals, 'scores': umetrics.simca.hierarchical_types.scores, 'y_orthogonal_res': umetrics.simca.hierarchical_types.y_orthogonal_res, 'y_orthogonal_scores': umetrics.simca.hierarchical_types.y_orthogonal_scores, 'y_pred': umetrics.simca.hierarchical_types.y_pred, 'y_related_res': umetrics.simca.hierarchical_types.y_related_res}
- none = umetrics.simca.hierarchical_types.none
- residuals = umetrics.simca.hierarchical_types.residuals
- scores = umetrics.simca.hierarchical_types.scores
- values = {0: umetrics.simca.hierarchical_types.none, 1: umetrics.simca.hierarchical_types.scores, 2: umetrics.simca.hierarchical_types.residuals, 4: umetrics.simca.hierarchical_types.y_orthogonal_res, 8: umetrics.simca.hierarchical_types.y_related_res, 16: umetrics.simca.hierarchical_types.y_orthogonal_scores, 32: umetrics.simca.hierarchical_types.y_pred}
- y_orthogonal_res = umetrics.simca.hierarchical_types.y_orthogonal_res
- y_orthogonal_scores = umetrics.simca.hierarchical_types.y_orthogonal_scores
- y_pred = umetrics.simca.hierarchical_types.y_pred
- y_related_res = umetrics.simca.hierarchical_types.y_related_res
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class modeltype(Boost.Python.enum) |
|
Different model types
A model can be one of these different types.
pcaX --- PCA model on X variables
pcaY --- PCA model on Y variables
pcaXY --- PCA model on both X and Y variables
pcaClass --- PCA model with classes in the X block
pls --- Model with PLS relationship between
X and Y variables
plsDa --- Model as PLS discriminant analysis
plsClass --- PLS model with classes in the X and Y block
opls --- Model with OPLS relationship between
X and Y variables to separate systematic
variation into parts predictive of and orthogonal to Y
oplsDa --- Model as OPLS discriminant analysis
oplsClass --- OPLS model with classes in the X and Y block
o2pls --- Model with O2PLS relationship between
X and Y variables to separate systematic
variation into parts predictive of and orthogonal to Y
o2plsDa --- Model as O2PLS discriminant analysis
o2plsClass --- O2PLS model with classes in the X and Y block
default --- Use the default type depending on workset data |
|
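When the model type has to be chosen programmatically, for example from a configuration string, the class-level names dictionary gives a direct mapping. A small sketch, assuming umetrics.simca is importable; the helper below is hypothetical and only wraps the documented names lookup:

    import umetrics.simca as simca

    def model_type_from_string(text):
        """Hypothetical helper: map a string such as 'oplsDa' to a modeltype member."""
        try:
            return simca.modeltype.names[text]
        except KeyError:
            raise ValueError('unknown model type %r; expected one of %s'
                             % (text, ', '.join(sorted(simca.modeltype.names))))

    mt = model_type_from_string('oplsDa')
    print(mt.name, int(mt))    # 'oplsDa' and its integer value (8 in the values table below)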
- Method resolution order:
- modeltype
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- default = umetrics.simca.modeltype.default
- names = {'default': umetrics.simca.modeltype.default, 'o2pls': umetrics.simca.modeltype.o2pls, 'o2plsClass': umetrics.simca.modeltype.o2plsClass, 'o2plsDa': umetrics.simca.modeltype.o2plsDa, 'opls': umetrics.simca.modeltype.opls, 'oplsClass': umetrics.simca.modeltype.oplsClass, 'oplsDa': umetrics.simca.modeltype.oplsDa, 'pcaClass': umetrics.simca.modeltype.pcaClass, 'pcaX': umetrics.simca.modeltype.pcaX, 'pcaXY': umetrics.simca.modeltype.pcaXY, ...}
- o2pls = umetrics.simca.modeltype.o2pls
- o2plsClass = umetrics.simca.modeltype.o2plsClass
- o2plsDa = umetrics.simca.modeltype.o2plsDa
- opls = umetrics.simca.modeltype.opls
- oplsClass = umetrics.simca.modeltype.oplsClass
- oplsDa = umetrics.simca.modeltype.oplsDa
- pcaClass = umetrics.simca.modeltype.pcaClass
- pcaX = umetrics.simca.modeltype.pcaX
- pcaXY = umetrics.simca.modeltype.pcaXY
- pcaY = umetrics.simca.modeltype.pcaY
- pls = umetrics.simca.modeltype.pls
- plsClass = umetrics.simca.modeltype.plsClass
- plsDa = umetrics.simca.modeltype.plsDa
- values = {0: umetrics.simca.modeltype.pcaX, 1: umetrics.simca.modeltype.pcaY, 2: umetrics.simca.modeltype.pcaXY, 3: umetrics.simca.modeltype.pcaClass, 4: umetrics.simca.modeltype.pls, 5: umetrics.simca.modeltype.plsDa, 6: umetrics.simca.modeltype.plsClass, 7: umetrics.simca.modeltype.opls, 8: umetrics.simca.modeltype.oplsDa, 9: umetrics.simca.modeltype.oplsClass, ...}
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class operator(Boost.Python.enum) |
|
Operator specifying how a variable is compared to a value
smaller --- The variable is smaller than the given value.
larger --- The variable is larger than the given value.
smallerOrEqual --- The variable is smaller than or equal to the given value.
largerOrEqual --- The variable is larger than or equal to the given value.
outside --- The variable is outside the min and max values. |
|
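The comparison semantics described above can be expressed in plain Python. The function below is a hypothetical illustration of what each member stands for, not part of the umetrics.simca API; note that the outside case needs both a min and a max value:

    import umetrics.simca as simca

    def matches(value, op, limit, max_limit=None):
        """Hypothetical illustration of the comparison semantics described above."""
        if op == simca.operator.smaller:
            return value < limit
        if op == simca.operator.larger:
            return value > limit
        if op == simca.operator.smallerOrEqual:
            return value <= limit
        if op == simca.operator.largerOrEqual:
            return value >= limit
        if op == simca.operator.outside:
            return value < limit or value > max_limit    # limit/max_limit act as min/max
        raise ValueError('unsupported operator: %s' % op)

    print(matches(3.5, simca.operator.largerOrEqual, 3.5))   # True
    print(matches(7.0, simca.operator.outside, 2.0, 5.0))    # True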
- Method resolution order:
- operator
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- larger = umetrics.simca.operator.larger
- largerOrEqual = umetrics.simca.operator.largerOrEqual
- names = {'larger': umetrics.simca.operator.larger, 'largerOrEqual': umetrics.simca.operator.largerOrEqual, 'outside': umetrics.simca.operator.outside, 'smaller': umetrics.simca.operator.smaller, 'smallerOrEqual': umetrics.simca.operator.smallerOrEqual}
- outside = umetrics.simca.operator.outside
- smaller = umetrics.simca.operator.smaller
- smallerOrEqual = umetrics.simca.operator.smallerOrEqual
- values = {0: umetrics.simca.operator.smaller, 1: umetrics.simca.operator.larger, 2: umetrics.simca.operator.smallerOrEqual, 3: umetrics.simca.operator.largerOrEqual, 4: umetrics.simca.operator.outside}
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class phase_iteration_treatment(Boost.Python.enum) |
|
Enum for which phase iterations to use when creating batch level datasets
use_all_rows --- Compare phase iterations within a phase; creates one row per phase iteration and batch.
average_iteration --- Use an average of all phase iterations.
last_iteration --- Use the last phase iteration.
all_iterations --- Use all phase iterations.
first_iterations --- Use the first phase iterations. |
|
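As the values dictionary below shows, the special treatments carry negative integer codes while first_iterations is 0. A short sketch that prints the mapping, assuming umetrics.simca is importable:

    import umetrics.simca as simca

    for value, member in sorted(simca.phase_iteration_treatment.values.items()):
        print('%3d  %s' % (value, member.name))
    # -5  all_iterations
    # -3  last_iteration
    # -2  average_iteration
    # -1  use_all_rows
    #  0  first_iterations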
- Method resolution order:
- phase_iteration_treatment
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- all_iterations = umetrics.simca.phase_iteration_treatment.all_iterations
- average_iteration = umetrics.simca.phase_iteration_treatment.average_iteration
- first_iterations = umetrics.simca.phase_iteration_treatment.first_iterations
- last_iteration = umetrics.simca.phase_iteration_treatment.last_iteration
- names = {'all_iterations': umetrics.simca.phase_iteration_treatment.all_iterations, 'average_iteration': umetrics.simca.phase_iteration_treatment.average_iteration, 'first_iterations': umetrics.simca.phase_iteration_treatment.first_iterations, 'last_iteration': umetrics.simca.phase_iteration_treatment.last_iteration, 'use_all_rows': umetrics.simca.phase_iteration_treatment.use_all_rows}
- use_all_rows = umetrics.simca.phase_iteration_treatment.use_all_rows
- values = {-5: umetrics.simca.phase_iteration_treatment.all_iterations, -3: umetrics.simca.phase_iteration_treatment.last_iteration, -2: umetrics.simca.phase_iteration_treatment.average_iteration, -1: umetrics.simca.phase_iteration_treatment.use_all_rows, 0: umetrics.simca.phase_iteration_treatment.first_iterations}
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class polynomialorder(Boost.Python.enum) |
|
Different polynomial orders. Only valid for derivative filters.
quadratic --- Quadratic polynomial order.
cubic --- Cubic polynomial order. |
|
- Method resolution order:
- polynomialorder
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- cubic = umetrics.simca.polynomialorder.cubic
- names = {'cubic': umetrics.simca.polynomialorder.cubic, 'quadratic': umetrics.simca.polynomialorder.quadratic}
- quadratic = umetrics.simca.polynomialorder.quadratic
- values = {2: umetrics.simca.polynomialorder.quadratic, 3: umetrics.simca.polynomialorder.cubic}
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class powerexponent(Boost.Python.enum) |
|
The power exponent for the power transform filter type.
invsquare --- -2
inverse --- -1
invsquareroot --- -0.5
invfourthroot --- -0.25
fourthroot --- 0.25
squareroot --- 0.5
identity --- 1
square --- 2 |
|
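Note that the integer value of each member is an ordinal (0-7), not the exponent it denotes. A hedged sketch of a hypothetical lookup table that pairs each member with the exponent listed above:

    import umetrics.simca as simca

    # Hypothetical mapping from enum member to the exponent it denotes,
    # taken from the descriptions above (the enum's own int value is only an ordinal).
    EXPONENTS = {
        simca.powerexponent.invsquare: -2.0,
        simca.powerexponent.inverse: -1.0,
        simca.powerexponent.invsquareroot: -0.5,
        simca.powerexponent.invfourthroot: -0.25,
        simca.powerexponent.fourthroot: 0.25,
        simca.powerexponent.squareroot: 0.5,
        simca.powerexponent.identity: 1.0,
        simca.powerexponent.square: 2.0,
    }

    print(EXPONENTS[simca.powerexponent.squareroot])  # 0.5, while int(...) would give 5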
- Method resolution order:
- powerexponent
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- fourthroot = umetrics.simca.powerexponent.fourthroot
- identity = umetrics.simca.powerexponent.identity
- inverse = umetrics.simca.powerexponent.inverse
- invfourthroot = umetrics.simca.powerexponent.invfourthroot
- invsquare = umetrics.simca.powerexponent.invsquare
- invsquareroot = umetrics.simca.powerexponent.invsquareroot
- names = {'fourthroot': umetrics.simca.powerexponent.fourthroot, 'identity': umetrics.simca.powerexponent.identity, 'inverse': umetrics.simca.powerexponent.inverse, 'invfourthroot': umetrics.simca.powerexponent.invfourthroot, 'invsquare': umetrics.simca.powerexponent.invsquare, 'invsquareroot': umetrics.simca.powerexponent.invsquareroot, 'square': umetrics.simca.powerexponent.square, 'squareroot': umetrics.simca.powerexponent.squareroot}
- square = umetrics.simca.powerexponent.square
- squareroot = umetrics.simca.powerexponent.squareroot
- values = {0: umetrics.simca.powerexponent.invsquare, 1: umetrics.simca.powerexponent.inverse, 2: umetrics.simca.powerexponent.invsquareroot, 3: umetrics.simca.powerexponent.invfourthroot, 4: umetrics.simca.powerexponent.fourthroot, 5: umetrics.simca.powerexponent.squareroot, 6: umetrics.simca.powerexponent.identity, 7: umetrics.simca.powerexponent.square}
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class prediction_source(Boost.Python.enum) |
|
Different sources that can be used for creating a predictionset
workset --- Take observations that are included in the workset.
complement --- Take observations or batches that are not included in the workset.
classnumber --- Take observations that are included in a certain class in the workset.
dataset --- Take observations from a dataset.
predictionset --- Take observations from another predictionset. |
|
- Method resolution order:
- prediction_source
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- classnumber = umetrics.simca.prediction_source.classnumber
- complement = umetrics.simca.prediction_source.complement
- dataset = umetrics.simca.prediction_source.dataset
- names = {'classnumber': umetrics.simca.prediction_source.classnumber, 'complement': umetrics.simca.prediction_source.complement, 'dataset': umetrics.simca.prediction_source.dataset, 'predictionset': umetrics.simca.prediction_source.predictionset, 'workset': umetrics.simca.prediction_source.workset}
- predictionset = umetrics.simca.prediction_source.predictionset
- values = {0: umetrics.simca.prediction_source.workset, 1: umetrics.simca.prediction_source.complement, 2: umetrics.simca.prediction_source.dataset, 3: umetrics.simca.prediction_source.classnumber, 4: umetrics.simca.prediction_source.predictionset}
- workset = umetrics.simca.prediction_source.workset
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class scaletype(Boost.Python.enum) |
|
Different scale types for variables
none --- No centering or scaling.
uv --- Centered and scaled to unit variance.
uvn --- Scaled to unit variance (not centered).
par --- Centered and scaled to Pareto variance.
parn --- Scaled to Pareto variance (not centered).
ctr --- Centered but not scaled.
freeze --- The scaling weight of the variable is frozen and will not be re-computed when observations in the workset change or when the variable metric is modified after freezing.
lag --- Scaling reserved for lagged variables (the same as its mother variable's scaling); cannot be set.
percentofmean --- Scaled with a weight calculated from a percentage of the variable's mean; cannot be set by scripts. |
|
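According to the descriptions above, lag and percentofmean cannot be set by scripts. A hypothetical guard built only on that text, not part of the umetrics.simca API:

    import umetrics.simca as simca

    # Assumption drawn from the descriptions above: these two are computed by SIMCA,
    # never requested from a script.
    NOT_SETTABLE = {simca.scaletype.lag, simca.scaletype.percentofmean}

    def check_settable(scale):
        """Hypothetical guard; raises if a script tries to request a reserved scale type."""
        if scale in NOT_SETTABLE:
            raise ValueError('%s cannot be set by scripts' % scale.name)
        return scale

    check_settable(simca.scaletype.par)      # fine
    # check_settable(simca.scaletype.lag)    # would raise ValueError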
- Method resolution order:
- scaletype
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- ctr = umetrics.simca.scaletype.ctr
- freeze = umetrics.simca.scaletype.freeze
- lag = umetrics.simca.scaletype.lag
- names = {'ctr': umetrics.simca.scaletype.ctr, 'freeze': umetrics.simca.scaletype.freeze, 'lag': umetrics.simca.scaletype.lag, 'none': umetrics.simca.scaletype.none, 'par': umetrics.simca.scaletype.par, 'parn': umetrics.simca.scaletype.parn, 'percentofmean': umetrics.simca.scaletype.percentofmean, 'uv': umetrics.simca.scaletype.uv, 'uvn': umetrics.simca.scaletype.uvn}
- none = umetrics.simca.scaletype.none
- par = umetrics.simca.scaletype.par
- parn = umetrics.simca.scaletype.parn
- percentofmean = umetrics.simca.scaletype.percentofmean
- uv = umetrics.simca.scaletype.uv
- uvn = umetrics.simca.scaletype.uvn
- values = {0: umetrics.simca.scaletype.none, 1: umetrics.simca.scaletype.uv, 2: umetrics.simca.scaletype.uvn, 3: umetrics.simca.scaletype.par, 4: umetrics.simca.scaletype.parn, 5: umetrics.simca.scaletype.ctr, 6: umetrics.simca.scaletype.freeze, 7: umetrics.simca.scaletype.lag, 8: umetrics.simca.scaletype.percentofmean}
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
class transformtype(Boost.Python.enum) |
|
Filter transform types.
none --- The variable is not transformed.
linear --- Linear transform. a*X+b
log --- Log10 transform. log(a*X+b)
neglog --- Negative log10 transform. -log(a*X+b)
logit --- log10((X-a)/(b-X))
exp --- Natural exponent. e^(a*X+b)
power --- (a*X+b)^C |
|
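The formulas listed above can be reproduced in plain Python for reference. The function below is an illustration of those expressions (base-10 logarithms for log, neglog, and logit; the natural exponential for exp), not the SIMCA implementation:

    import math
    import umetrics.simca as simca

    def apply_transform(x, kind, a=1.0, b=0.0, c=1.0):
        """Illustration of the transform formulas listed above (not the SIMCA implementation)."""
        t = simca.transformtype
        if kind == t.none:
            return x
        if kind == t.linear:
            return a * x + b
        if kind == t.log:
            return math.log10(a * x + b)
        if kind == t.neglog:
            return -math.log10(a * x + b)
        if kind == t.logit:
            return math.log10((x - a) / (b - x))
        if kind == t.exp:
            return math.exp(a * x + b)
        if kind == t.power:
            return (a * x + b) ** c
        raise ValueError('unsupported transform: %s' % kind)

    print(apply_transform(100.0, simca.transformtype.log))           # 2.0
    print(apply_transform(3.0, simca.transformtype.power, c=2.0))    # 9.0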
- Method resolution order:
- transformtype
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- exp = umetrics.simca.transformtype.exp
- linear = umetrics.simca.transformtype.linear
- log = umetrics.simca.transformtype.log
- logit = umetrics.simca.transformtype.logit
- names = {'exp': umetrics.simca.transformtype.exp, 'linear': umetrics.simca.transformtype.linear, 'log': umetrics.simca.transformtype.log, 'logit': umetrics.simca.transformtype.logit, 'neglog': umetrics.simca.transformtype.neglog, 'none': umetrics.simca.transformtype.none, 'power': umetrics.simca.transformtype.power}
- neglog = umetrics.simca.transformtype.neglog
- none = umetrics.simca.transformtype.none
- power = umetrics.simca.transformtype.power
- values = {1: umetrics.simca.transformtype.none, 2: umetrics.simca.transformtype.linear, 3: umetrics.simca.transformtype.log, 4: umetrics.simca.transformtype.neglog, 5: umetrics.simca.transformtype.logit, 6: umetrics.simca.transformtype.exp, 7: umetrics.simca.transformtype.power}
Methods inherited from Boost.Python.enum:
- __repr__(self, /)
- Return repr(self).
- __str__(self, /)
- Return str(self).
Data descriptors inherited from Boost.Python.enum:
- name
Methods inherited from builtins.int:
- __abs__(self, /)
- abs(self)
- __add__(self, value, /)
- Return self+value.
- __and__(self, value, /)
- Return self&value.
- __bool__(self, /)
- self != 0
- __ceil__(...)
- Ceiling of an Integral returns itself.
- __divmod__(self, value, /)
- Return divmod(self, value).
- __eq__(self, value, /)
- Return self==value.
- __float__(self, /)
- float(self)
- __floor__(...)
- Flooring an Integral returns itself.
- __floordiv__(self, value, /)
- Return self//value.
- __format__(...)
- default object formatter
- __ge__(self, value, /)
- Return self>=value.
- __getattribute__(self, name, /)
- Return getattr(self, name).
- __getnewargs__(...)
- __gt__(self, value, /)
- Return self>value.
- __hash__(self, /)
- Return hash(self).
- __index__(self, /)
- Return self converted to an integer, if self is suitable for use as an index into a list.
- __int__(self, /)
- int(self)
- __invert__(self, /)
- ~self
- __le__(self, value, /)
- Return self<=value.
- __lshift__(self, value, /)
- Return self<<value.
- __lt__(self, value, /)
- Return self<value.
- __mod__(self, value, /)
- Return self%value.
- __mul__(self, value, /)
- Return self*value.
- __ne__(self, value, /)
- Return self!=value.
- __neg__(self, /)
- -self
- __new__(*args, **kwargs) from builtins.type
- Create and return a new object. See help(type) for accurate signature.
- __or__(self, value, /)
- Return self|value.
- __pos__(self, /)
- +self
- __pow__(self, value, mod=None, /)
- Return pow(self, value, mod).
- __radd__(self, value, /)
- Return value+self.
- __rand__(self, value, /)
- Return value&self.
- __rdivmod__(self, value, /)
- Return divmod(value, self).
- __rfloordiv__(self, value, /)
- Return value//self.
- __rlshift__(self, value, /)
- Return value<<self.
- __rmod__(self, value, /)
- Return value%self.
- __rmul__(self, value, /)
- Return value*self.
- __ror__(self, value, /)
- Return value|self.
- __round__(...)
- Rounding an Integral returns itself.
Rounding with an ndigits argument also returns an integer.
- __rpow__(self, value, mod=None, /)
- Return pow(value, self, mod).
- __rrshift__(self, value, /)
- Return value>>self.
- __rshift__(self, value, /)
- Return self>>value.
- __rsub__(self, value, /)
- Return value-self.
- __rtruediv__(self, value, /)
- Return value/self.
- __rxor__(self, value, /)
- Return value^self.
- __sizeof__(...)
- Returns size in memory, in bytes
- __sub__(self, value, /)
- Return self-value.
- __truediv__(self, value, /)
- Return self/value.
- __trunc__(...)
- Truncating an Integral returns itself.
- __xor__(self, value, /)
- Return self^value.
- bit_length(...)
- int.bit_length() -> int
Number of bits necessary to represent self in binary.
>>> bin(37)
'0b100101'
>>> (37).bit_length()
6
- conjugate(...)
- Returns self, the complex conjugate of any int.
- from_bytes(...) from builtins.type
- int.from_bytes(bytes, byteorder, *, signed=False) -> int
Return the integer represented by the given array of bytes.
The bytes argument must be a bytes-like object (e.g. bytes or bytearray).
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument indicates whether two's complement is
used to represent the integer.
- to_bytes(...)
- int.to_bytes(length, byteorder, *, signed=False) -> bytes
Return an array of bytes representing an integer.
The integer is represented using length bytes. An OverflowError is
raised if the integer is not representable with the given number of
bytes.
The byteorder argument determines the byte order used to represent the
integer. If byteorder is 'big', the most significant byte is at the
beginning of the byte array. If byteorder is 'little', the most
significant byte is at the end of the byte array. To request the native
byte order of the host system, use `sys.byteorder' as the byte order value.
The signed keyword-only argument determines whether two's complement is
used to represent the integer. If signed is False and a negative integer
is given, an OverflowError is raised.
Data descriptors inherited from builtins.int:
- denominator
- the denominator of a rational number in lowest terms
- imag
- the imaginary part of a complex number
- numerator
- the numerator of a rational number in lowest terms
- real
- the real part of a complex number
|
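A minimal sketch of how the transformtype members map onto the formulas in the docstring above. It assumes a working umetrics.simca installation; the coefficients a, b and c and the input value x are arbitrary illustration values, not defaults taken from the library.

    import math
    import umetrics.simca

    tt = umetrics.simca.transformtype

    # Arbitrary illustration coefficients (a, b, c) and input value x.
    a, b, c, x = 1.0, 2.0, 2.0, 1.5

    # The documented formula for each transform type.
    formulas = {
        tt.none:   lambda v: v,
        tt.linear: lambda v: a * v + b,
        tt.log:    lambda v: math.log10(a * v + b),
        tt.neglog: lambda v: -math.log10(a * v + b),
        tt.logit:  lambda v: math.log10((v - a) / (b - v)),  # defined only for a < v < b
        tt.exp:    lambda v: math.exp(a * v + b),
        tt.power:  lambda v: (a * v + b) ** c,
    }

    for ttype, fn in formulas.items():
        # Each member exposes its name via the inherited `name' descriptor
        # and converts to its integer value with int().
        print(int(ttype), ttype.name, fn(x))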
class waveletfunctiontype(Boost.Python.enum) |
|
Wavelet filter function type.
beylkin --- Wavelet order: N/A.
coiflet --- Wavelet order: 2, 3, 4, 5.
daubechies --- Wavelet order: 4, 6, 8, 10, 12, 20, 50.
symmlet --- Wavelet order: 4, 6, 8, 10.
biorthogonal1 --- Wavelet order: 1, 3, 5.
biorthogonal2 --- Wavelet order: 2, 4, 6, 8.
biorthogonal3 --- Wavelet order: 1, 3, 5, 7, 9.
biorthogonal4 --- Wavelet order: 4.
biorthogonal5 --- Wavelet order: 5.
biorthogonal6 --- Wavelet order: 8. |
|
- Method resolution order:
- waveletfunctiontype
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- beylkin = umetrics.simca.waveletfunctiontype.beylkin
- biorthogonal1 = umetrics.simca.waveletfunctiontype.biorthogonal1
- biorthogonal2 = umetrics.simca.waveletfunctiontype.biorthogonal2
- biorthogonal3 = umetrics.simca.waveletfunctiontype.biorthogonal3
- biorthogonal4 = umetrics.simca.waveletfunctiontype.biorthogonal4
- biorthogonal5 = umetrics.simca.waveletfunctiontype.biorthogonal5
- biorthogonal6 = umetrics.simca.waveletfunctiontype.biorthogonal6
- coiflet = umetrics.simca.waveletfunctiontype.coiflet
- daubechies = umetrics.simca.waveletfunctiontype.daubechies
- names = {'beylkin': umetrics.simca.waveletfunctiontype.beylkin, 'biorthogonal1': umetrics.simca.waveletfunctiontype.biorthogonal1, 'biorthogonal2': umetrics.simca.waveletfunctiontype.biorthogonal2, 'biorthogonal3': umetrics.simca.waveletfunctiontype.biorthogonal3, 'biorthogonal4': umetrics.simca.waveletfunctiontype.biorthogonal4, 'biorthogonal5': umetrics.simca.waveletfunctiontype.biorthogonal5, 'biorthogonal6': umetrics.simca.waveletfunctiontype.biorthogonal6, 'coiflet': umetrics.simca.waveletfunctiontype.coiflet, 'daubechies': umetrics.simca.waveletfunctiontype.daubechies, 'symmlet': umetrics.simca.waveletfunctiontype.symmlet}
- symmlet = umetrics.simca.waveletfunctiontype.symmlet
- values = {0: umetrics.simca.waveletfunctiontype.beylkin, 1: umetrics.simca.waveletfunctiontype.coiflet, 2: umetrics.simca.waveletfunctiontype.daubechies, 3: umetrics.simca.waveletfunctiontype.symmlet, 4: umetrics.simca.waveletfunctiontype.biorthogonal1, 5: umetrics.simca.waveletfunctiontype.biorthogonal2, 6: umetrics.simca.waveletfunctiontype.biorthogonal3, 7: umetrics.simca.waveletfunctiontype.biorthogonal4, 8: umetrics.simca.waveletfunctiontype.biorthogonal5, 9: umetrics.simca.waveletfunctiontype.biorthogonal6}
Methods and data descriptors inherited from Boost.Python.enum and builtins.int are identical to those listed under transformtype above.
|
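The valid wavelet orders listed in the docstring above are not exposed programmatically in this listing, so a small lookup table can be handy. A minimal sketch, assuming umetrics.simca is installed; check_wavelet_order is a hypothetical helper for illustration, not part of the API.

    import umetrics.simca

    wft = umetrics.simca.waveletfunctiontype

    # Valid wavelet orders per function type, transcribed from the
    # docstring above. An empty tuple means the order is not applicable.
    VALID_ORDERS = {
        wft.beylkin:       (),
        wft.coiflet:       (2, 3, 4, 5),
        wft.daubechies:    (4, 6, 8, 10, 12, 20, 50),
        wft.symmlet:       (4, 6, 8, 10),
        wft.biorthogonal1: (1, 3, 5),
        wft.biorthogonal2: (2, 4, 6, 8),
        wft.biorthogonal3: (1, 3, 5, 7, 9),
        wft.biorthogonal4: (4,),
        wft.biorthogonal5: (5,),
        wft.biorthogonal6: (8,),
    }

    def check_wavelet_order(function, order):
        # Hypothetical helper: True if `order' is listed as valid for
        # the given wavelet function type.
        return order in VALID_ORDERS.get(function, ())

    print(check_wavelet_order(wft.daubechies, 8))     # True
    print(check_wavelet_order(wft.biorthogonal4, 6))  # False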
class ytreatment(Boost.Python.enum) |
|
Y variable treatments for batch projects.
noprocessing --- No preprocessing; the original values are used.
smooth --- Smooth the original values.
smoothShift --- Smooth and shift values so that all phases and batches start at 0.
smoothShiftNormalize --- Smooth, normalize and shift values so that all phases and batches start at 0.
smoothNormalize --- Smooth and normalize the original values.
shift --- Shift values so that all phases and batches start at 0.
shiftNormalize --- Normalize and shift values so that all phases and batches start at 0.
normalize --- Normalize the values.
nobatchY --- Not a batch Y; non-batch models are created if this variable is the maturity.
default --- The treatment is decided based on the variable. |
|
- Method resolution order:
- ytreatment
- Boost.Python.enum
- builtins.int
- builtins.object
Data and other attributes defined here:
- default = umetrics.simca.ytreatment.default
- names = {'default': umetrics.simca.ytreatment.default, 'nobatchY': umetrics.simca.ytreatment.nobatchY, 'noprocessing': umetrics.simca.ytreatment.noprocessing, 'normalize': umetrics.simca.ytreatment.normalize, 'shift': umetrics.simca.ytreatment.shift, 'shiftNormalize': umetrics.simca.ytreatment.shiftNormalize, 'smooth': umetrics.simca.ytreatment.smooth, 'smoothNormalize': umetrics.simca.ytreatment.smoothNormalize, 'smoothShift': umetrics.simca.ytreatment.smoothShift, 'smoothShiftNormalize': umetrics.simca.ytreatment.smoothShiftNormalize}
- nobatchY = umetrics.simca.ytreatment.nobatchY
- noprocessing = umetrics.simca.ytreatment.noprocessing
- normalize = umetrics.simca.ytreatment.normalize
- shift = umetrics.simca.ytreatment.shift
- shiftNormalize = umetrics.simca.ytreatment.shiftNormalize
- smooth = umetrics.simca.ytreatment.smooth
- smoothNormalize = umetrics.simca.ytreatment.smoothNormalize
- smoothShift = umetrics.simca.ytreatment.smoothShift
- smoothShiftNormalize = umetrics.simca.ytreatment.smoothShiftNormalize
- values = {0: umetrics.simca.ytreatment.noprocessing, 1: umetrics.simca.ytreatment.smooth, 2: umetrics.simca.ytreatment.smoothShift, 3: umetrics.simca.ytreatment.smoothShiftNormalize, 4: umetrics.simca.ytreatment.smoothNormalize, 5: umetrics.simca.ytreatment.shift, 6: umetrics.simca.ytreatment.shiftNormalize, 7: umetrics.simca.ytreatment.normalize, 8: umetrics.simca.ytreatment.nobatchY, 9: umetrics.simca.ytreatment.default}
Methods and data descriptors inherited from Boost.Python.enum and builtins.int are identical to those listed under transformtype above.
| |
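The names and values mappings shown for ytreatment (and for the other enums) translate between strings, integers and enum members. A minimal sketch, assuming umetrics.simca is installed:

    import umetrics.simca

    yt = umetrics.simca.ytreatment

    # Look up a treatment by its string name via the names mapping.
    treatment = yt.names['smoothShiftNormalize']
    print(treatment.name, int(treatment))   # smoothShiftNormalize 3

    # The reverse lookup goes through the values mapping, keyed on the
    # underlying integer value.
    print(yt.values[9] == yt.default)       # True

    # Because the members subclass builtins.int they sort and compare
    # like plain integers.
    print(sorted(int(t) for t in yt.values.values()))  # integers 0 through 9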