Configuring Lexer

Configuration Management

Why Configurations?

We provide configurable features so you can tailor Lexer to your specific modeling and benchmarking needs.

Three Configuration Methods

There are three ways to configure Lexer.

Here's how we process configuration flags, in order of precedence:

1. Lexer APIs

Explicitly provide arguments to the @Lexer decorator and APIs like benchmark(torch_model, input, ...).
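For illustration, here is a sketch of what this might look like. The import path, the model, and the specific argument names used here are assumptions for the example; check Lexer's API reference for the exact signatures:

```python
import torch
from lexer import Lexer, benchmark  # assumed import path

# Decorator form: attach export/benchmark settings to the model class
@Lexer(enable_pytorch=True, num_iterations=1000)
class MyModel(torch.nn.Module):
    def forward(self, x):
        return x * 2

# Functional form: pass the model and a sample input directly
results = benchmark(MyModel(), torch.randn(1, 8))
```

Arguments passed this way take the highest precedence, overriding both config files.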

2. User config file (.lexer_config.toml)

This configuration file holds values that override the built-in defaults for any settings left undefined in Lexer API calls.

The Lexer user config file must be named .lexer_config.toml, and the LEXER_CONFIG_PATH environment variable must point to its directory if it is not located in $HOME.


Setting a custom LEXER_CONFIG_PATH

For example, you may set your LEXER_CONFIG_PATH like this:

$ export LEXER_CONFIG_PATH=/path/to/config/dir

If you're unfamiliar with TOML, you can learn more at the official TOML website.

3. Lexer's default config file

Any setting not supplied through the APIs or the user config file falls back to the value in Lexer's built-in config file.

The User-Defined .lexer_config.toml File

The .lexer_config.toml file lets you configure these settings without passing a pile of in-line API or decorator arguments.

This is particularly useful if you plan to use Lexer to benchmark more than one model (or even multiple nn.Modules at once) with similar export or benchmark configurations.

Benchmark Configurations

  • enable_onnxruntime: Boolean flag to enable ONNXRuntime benchmarks
  • enable_pytorch: Boolean flag to enable PyTorch benchmarks
  • num_iterations: Integer number of iterations to run during benchmarking

Output Configurations

  • enable_csv: Boolean flag to indicate if a CSV should be exported
  • enable_stdout: Boolean flag to enable printing to console stdout

Here's a sample .lexer_config.toml:

# ---------------------------- LEXER CONFIGS -------------------------------

# --------------------------------- EXPORT ---------------------------------
batch_size = 1
enable_export_validation = true

# ----------------------------- BENCHMARKS ---------------------------------
enable_onnxruntime = true
enable_pytorch = true
num_iterations = 1000000

# ------------------------------- OUTPUT ------------------------------------
# OUTPUTS - Export + benchmark output formats - stdout, csv etc.
enable_csv = true
enable_stdout = true