
Machine learning models in Layer

Machine learning models are akin to mathematical functions: they take a request in the form of input data, make a prediction on that data, and serve a response. Layer takes a declarative approach to streamline the ML model development process.

ML models are first-class entities in Layer. They are integral to and built within a Layer Project. They are versioned and stored in the Discover > Models tab.

Models in Layer

All models are defined and configured in a model YAML file that references one or more Python files. It is usually named model.yaml. A basic sample layout might look as follows:

```
├── my_model/
│   ├── model.yaml
│   ├── …
```

Models in Layer are stored in the Models tab. For more information about how to get them there, refer to Add models.

Configure models

Models are configured in a model YAML file. An example is shown below.

As with other Layer entities, you can give the model YAML file any name you want. Layer recognizes the file as a model configuration via the `type: model` field within the file itself.
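As a rough sketch, such a file might look like the following. Only the `type: model` field is documented here; every other key and value is an illustrative placeholder, not Layer's confirmed schema:

```yaml
# Hypothetical model.yaml — only `type: model` is confirmed by this page;
# the remaining keys are illustrative placeholders.
type: model
name: my_model
description: "Example model configuration"
training:
  entrypoint: model.py  # Python file defining the train_model function
```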

For more information, refer to Model YAML.

Train models

Model training is done via Python files referenced from the model YAML file.

This Python code uses the Layer SDK to define a train_model function that takes a train argument as well as a series of feature set arguments. You can train your model any way you'd like within this function. While training, you can use train.log_parameter and train.log_metric to save parameters and metrics of your training runs that will then be viewable in the Layer Models tab. You can also use train.register_input and train.register_output to define the model signature, which can then be used for determining the data lineage of this model.
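The shape of such a training file can be sketched as follows. In a real Layer project the `train` object is injected by the Layer SDK; here a minimal stand-in class is defined (with the logging methods named above) so the example is self-contained, and the threshold model and feature data are purely illustrative:

```python
class TrainStandIn:
    """Minimal stand-in for the Layer `train` object, exposing the
    logging calls described above (illustrative, not the real SDK)."""

    def __init__(self):
        self.parameters = {}
        self.metrics = {}

    def log_parameter(self, name, value):
        self.parameters[name] = value

    def log_metric(self, name, value):
        self.metrics[name] = value

    def register_input(self, data):
        pass  # would record the input side of the model signature

    def register_output(self, data):
        pass  # would record the output side of the model signature


def train_model(train, feature_rows):
    """Train a toy threshold classifier on (feature, label) pairs,
    logging a parameter and a metric via the `train` object."""
    xs = [x for x, _ in feature_rows]
    labels = [y for _, y in feature_rows]
    train.register_input(feature_rows)

    # "Training": pick the mean feature value as the decision threshold.
    threshold = sum(xs) / len(xs)
    train.log_parameter("threshold", threshold)

    preds = [1 if x >= threshold else 0 for x in xs]
    accuracy = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    train.log_metric("train_accuracy", accuracy)
    return threshold


train = TrainStandIn()
rows = [(0.0, 0), (0.2, 0), (0.8, 1), (1.0, 1)]
model = train_model(train, rows)
print(train.parameters, train.metrics)
```

The logged parameters and metrics are what would surface in the Layer Models tab for each training run.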

For more information about the train_model function, refer to train_model.


Note: Layer does not currently support the XGBoost scikit-learn interface.