triton.autotune
- triton.autotune(configs, key, prune_configs_by=None, reset_to_zero=None)
Decorator for auto-tuning a triton.jit'd function.

```python
@triton.autotune(
    configs=[
        triton.Config(meta={'BLOCK_SIZE': 128}, num_warps=4),
        triton.Config(meta={'BLOCK_SIZE': 1024}, num_warps=8),
    ],
    # the two configs above will be evaluated any time
    # the value of x_size changes
    key=['x_size'],
)
@triton.jit
def kernel(x_ptr, x_size, **META):
    BLOCK_SIZE = META['BLOCK_SIZE']
```
- Note:
While the configurations are being evaluated, the kernel runs multiple times, so any value the kernel updates will be updated multiple times. To avoid this undesired behavior, use the reset_to_zero argument, which resets the provided tensors to zero before any configuration is run.
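The problem the note describes, and the effect of reset_to_zero, can be illustrated with a plain-Python sketch (this is not Triton itself; `kernel_accumulate` is a hypothetical stand-in for a kernel that adds into its output buffer):

```python
def kernel_accumulate(out, x):
    # Stand-in for a kernel that *adds into* its output buffer.
    for i, v in enumerate(x):
        out[i] += v

def autotune_naive(configs, out, x):
    # Runs the kernel once per config; the output accumulates
    # across trials, which is the undesired behavior.
    for _ in configs:
        kernel_accumulate(out, x)

def autotune_with_reset(configs, out, x):
    # Resets the output to zero before each trial (what
    # reset_to_zero does), so the result matches a single run.
    for _ in configs:
        for i in range(len(out)):
            out[i] = 0
        kernel_accumulate(out, x)

x = [1, 2, 3]

out_naive = [0, 0, 0]
autotune_naive(['cfg_a', 'cfg_b'], out_naive, x)      # accumulated twice: [2, 4, 6]

out_reset = [0, 0, 0]
autotune_with_reset(['cfg_a', 'cfg_b'], out_reset, x)  # matches one run: [1, 2, 3]
```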
- Parameters:
- configs (list[triton.Config]) – a list of triton.Config objects
- key (list[str]) – a list of argument names whose change in value will trigger the evaluation of all provided configs.
- prune_configs_by – a dict of functions used to prune configs. Fields: 'perf_model': a performance model used to predict the running time under different configs; returns the estimated running time. 'top_k': the number of configs to benchmark. 'early_config_prune' (optional): a function used to prune configs early (e.g. by num_stages); it takes configs: List[Config] as input and returns the pruned configs.
- reset_to_zero (list[str]) – a list of argument names whose values will be reset to zero before any config is evaluated.
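The interaction between key and prune_configs_by can be sketched as a minimal pure-Python decorator (an illustration of the control flow only, not Triton's real implementation; the names `autotune_sketch` and `bench` are hypothetical): configs are benchmarked whenever the values of the key arguments change, otherwise the cached winner is reused, and a perf_model can shrink the candidate set to top_k before benchmarking.

```python
import time

def autotune_sketch(configs, key, prune_configs_by=None):
    # Pure-Python sketch of the autotuner's control flow.
    cache = {}  # maps key-argument values -> best config found

    def decorator(fn):
        def wrapper(**kwargs):
            key_values = tuple(kwargs[k] for k in key)
            if key_values not in cache:
                candidates = list(configs)
                if prune_configs_by:
                    # 'perf_model' predicts running time; keep only the
                    # 'top_k' fastest predictions before benchmarking.
                    perf_model = prune_configs_by['perf_model']
                    top_k = prune_configs_by['top_k']
                    candidates.sort(key=lambda c: perf_model(c, **kwargs))
                    candidates = candidates[:top_k]

                def bench(cfg):
                    # Time one run of fn under this config.
                    start = time.perf_counter()
                    fn(config=cfg, **kwargs)
                    return time.perf_counter() - start

                cache[key_values] = min(candidates, key=bench)
            return fn(config=cache[key_values], **kwargs)
        return wrapper
    return decorator

# Hypothetical usage: two candidate configs, keyed on x_size.
configs = [{'BLOCK_SIZE': 128}, {'BLOCK_SIZE': 1024}]
calls = []

@autotune_sketch(configs, key=['x_size'])
def kernel(x_size, config):
    calls.append(config['BLOCK_SIZE'])

kernel(x_size=10)  # new key value: both configs benchmarked, then best run (3 calls)
kernel(x_size=10)  # same key value: cached config reused, single run (1 call)
kernel(x_size=20)  # key changed: configs re-evaluated (3 calls)
```

Note how the cache is keyed only on the key arguments, which is why changing x_size triggers a fresh round of benchmarking while repeated calls with the same x_size run the kernel exactly once.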