Training

__init__(training_name, from_config=False, model=None, dataset=None, from_preset='adaptive', pytest=False, verbose=False, **kwargs_optional)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| training_name | str | e.g. 'my_training' | required |
| from_config | bool | True => read parameters from config file (static definition). False => use Training arguments (dynamic definition). | False |
| model | Optional[str] | equivalent to model_name, e.g. 'bert-base-cased' | None |
| dataset | Optional[str] | equivalent to dataset_name, e.g. 'conll2003' | None |
| from_preset | Optional[str] | name of the preset to take hyperparameters from: 'adaptive', 'original' or 'stable' | 'adaptive' |
| pytest | bool | for internal testing only; do not set | False |
| verbose | bool | True => verbose output | False |
| `**kwargs_optional` | Any | additional training parameters (hyperparameters), e.g. max_epochs | {} |
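
A minimal usage sketch of a dynamic Training definition. The import path is an assumption (it is not shown in this section) and follows the nerblackbox package this class is documented in:

```python
# Assumption: Training is importable from the nerblackbox package.
from nerblackbox import Training

# from_config=False (the default): model and dataset are mandatory,
# remaining hyperparameters are taken from the 'adaptive' preset.
training = Training(
    "my_training",
    model="bert-base-cased",
    dataset="conll2003",
    from_preset="adaptive",
)
```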

get_result(metric='f1', level='entity', label='micro', phase='test', average=True)

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| metric | str | "f1", "precision" or "recall" | 'f1' |
| level | str | "entity" or "token" | 'entity' |
| label | str | "micro", "macro" or a single class, e.g. "PER" | 'micro' |
| phase | str | "val" or "test" | 'test' |
| average | bool | if True, return the average result of all runs; if False, return the result of the best run | True |

Returns:

| Name | Type | Description |
| --- | --- | --- |
| result | Optional[str] | e.g. "0.9011 +- 0.0023" (average = True) or "0.9045" (average = False) |

run()

Run a single training.

Note:

  • from_config == True -> the training config file is used; no other optional arguments are taken into account

  • from_config == False -> the training config file is created dynamically and the optional arguments are used:

    • model and dataset are mandatory.

    • All other arguments relate to hyperparameters and are optional. They are determined using the following hierarchy (see the sketch after this list):

      1) explicitly passed optional arguments

      2) from_preset ('adaptive', 'original' or 'stable'), which specifies e.g. the hyperparameters "max_epochs", "early_stopping" and "lr_schedule"

      3) the default training configuration
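
A sketch of both definition modes; max_epochs stands in for any hyperparameter passed via `**kwargs_optional`:

```python
# static definition: all parameters come from the training config file
training = Training("my_training", from_config=True)
training.run()

# dynamic definition: the explicit argument (max_epochs) overrides the
# preset, which in turn overrides the default training configuration
training = Training(
    "my_training",
    model="bert-base-cased",
    dataset="conll2003",
    from_preset="adaptive",
    max_epochs=5,  # hyperparameter passed via **kwargs_optional
)
training.run()
```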

show_config()

Print the training config.
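
For instance, to verify which values were resolved from explicit arguments, the preset and the defaults:

```python
training.show_config()  # prints the effective training configuration
```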