LPM
- class LPM(embedding_dim, optimizer_name, learning_rate, learning_rate_decay, num_layers, hidden_dim, batch_size, embedding_aggregation_mode='mean', dropout=0.0, num_workers=0, pin_memory=True, early_stopping_patience=0, profiler=False, lightning_trainer_pars=None)
Bases: ModelMixin, LightningModule
Large perturbation model.
- Parameters:
  - embedding_dim (int) – Dimensionality of all embedding layers.
  - optimizer_name (str) – Name of the PyTorch optimizer to use.
  - learning_rate (float) – Learning rate.
  - learning_rate_decay (float) – Exponential learning rate decay.
  - num_layers (int) – Depth of the MLP.
  - hidden_dim (int) – Number of units in each hidden layer.
  - batch_size (int) – Size of batches during training.
  - embedding_aggregation_mode (Literal['sum', 'mean', 'max']) – Defines how to aggregate embeddings.
  - num_workers (int) – Number of workers to use during data loading.
  - pin_memory (bool) – Whether to pin memory during data loading.
  - early_stopping_patience (int) – Patience for early stopping in case a validation set is given.
  - lightning_trainer_pars (Optional[dict]) – Parameters for pytorch-lightning.
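A minimal construction sketch (the import path and all hyperparameter values below are assumptions, not part of this reference):

    from lpm import LPM  # import path is an assumption

    model = LPM(
        embedding_dim=128,
        optimizer_name="Adam",         # any valid PyTorch optimizer name
        learning_rate=1e-3,
        learning_rate_decay=0.99,
        num_layers=4,
        hidden_dim=256,
        batch_size=512,
        embedding_aggregation_mode="mean",
        early_stopping_patience=5,     # used when a validation set is given
    )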
- configure_callbacks()
Configure model-specific callbacks. When the model gets attached, e.g., when .fit() or .test() gets called, the list or a callback returned here will be merged with the list of callbacks passed to the Trainer's callbacks argument. If a callback returned here has the same type as one or several callbacks already present in the Trainer's callbacks list, it will take priority and replace them. In addition, Lightning will make sure ModelCheckpoint callbacks run last.
- Returns:
A callback or a list of callbacks which will extend the list of callbacks in the Trainer.
Example:

    def configure_callbacks(self):
        early_stop = EarlyStopping(monitor="val_acc", mode="max")
        checkpoint = ModelCheckpoint(monitor="val_loss")
        return [early_stop, checkpoint]
- load_state_dict(state_dict, strict=True, assign=False)
Copy parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
Warning
If assign is True the optimizer must be created after the call to load_state_dict unless get_swap_module_params_on_conversion() is True.
- Parameters:
  - state_dict (dict) – a dict containing parameters and persistent buffers.
  - strict (bool, optional) – whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
  - assign (bool, optional) – When set to False, the properties of the tensors in the current module are preserved, whereas setting it to True preserves properties of the Tensors in the state dict. The only exception is the requires_grad field of Parameters, for which the value from the module is preserved. Default: False
- Returns:
  - missing_keys is a list of str containing any keys that are expected by this module but missing from the provided state_dict.
  - unexpected_keys is a list of str containing the keys that are not expected by this module but present in the provided state_dict.
- Return type:
NamedTuple with missing_keys and unexpected_keys fields
Note
If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
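A short sketch of non-strict loading that inspects the returned NamedTuple (the checkpoint path and the model variable are hypothetical):

    import torch

    # 'model' is an already-constructed LPM instance; the path is hypothetical.
    checkpoint = torch.load("lpm_checkpoint.pt")
    result = model.load_state_dict(checkpoint, strict=False)

    # Keys expected by the module but absent from the checkpoint:
    print(result.missing_keys)
    # Keys present in the checkpoint but not expected by the module:
    print(result.unexpected_keys)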
- on_train_epoch_end()
Called in the training loop at the very end of the epoch.
To access all batch outputs at the end of the epoch, you can cache step outputs as an attribute of the LightningModule and access them in this hook:

    class MyLightningModule(L.LightningModule):
        def __init__(self):
            super().__init__()
            self.training_step_outputs = []

        def training_step(self):
            loss = ...
            self.training_step_outputs.append(loss)
            return loss

        def on_train_epoch_end(self):
            # do something with all training_step outputs, for example:
            epoch_mean = torch.stack(self.training_step_outputs).mean()
            self.log("training_epoch_mean", epoch_mean)
            # free up the memory
            self.training_step_outputs.clear()
- predict(data_x, batch_size=100000)
Predict values for the given data.
- Parameters:
  - data_x (PlibData[DataFrame]) – Data without labels, i.e. without the "values" column.
  - batch_size (Optional[int]) – Batch size for prediction. Some models might not support this functionality.
- Return type:
numpy.ndarray
- Returns:
Value predictions.
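A hedged usage sketch; the construction of the PlibData input is assumed, since this reference does not define it:

    # 'model' is a trained LPM instance and 'data_x' is a PlibData[DataFrame]
    # without the "values" column (both assumed here).
    predictions = model.predict(data_x, batch_size=50000)
    print(predictions.shape)  # NumPy array of value predictions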
- state_dict(*args, **kwargs)
Return a dictionary containing references to the whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
Note
The returned object is a shallow copy. It contains references to the module's parameters and buffers.
Warning
Currently state_dict() also accepts positional arguments for destination, prefix and keep_vars in order. However, this is being deprecated and keyword arguments will be enforced in future releases.
Warning
Please avoid the use of argument destination as it is not designed for end-users.
- Parameters:
  - destination (dict, optional) – If provided, the state of module will be updated into the dict and the same object is returned. Otherwise, an OrderedDict will be created and returned. Default: None.
  - prefix (str, optional) – a prefix added to parameter and buffer names to compose the keys in state_dict. Default: ''.
  - keep_vars (bool, optional) – by default the Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed. Default: False.
- Returns:
a dictionary containing a whole state of the module
- Return type:
dict
Example:

    >>> # xdoctest: +SKIP("undefined vars")
    >>> module.state_dict().keys()
    ['bias', 'weight']
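A common save/restore pattern built on this method (the file name and model variable are hypothetical):

    import torch

    # Persist the trained model's parameters and buffers to disk.
    torch.save(model.state_dict(), "lpm_weights.pt")

    # Later, restore them into a freshly constructed model with the same
    # architecture and hyperparameters.
    model.load_state_dict(torch.load("lpm_weights.pt"))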