We are pleased to announce that the Runners Autoscaler feature from Bitbucket Pipelines has transitioned from beta to general availability. This feature is designed to run on Kubernetes, delivering an enhanced user experience with increased flexibility and control.
The primary function of Runners Autoscaler is to efficiently scale runners up and down based on the number of active builds in Bitbucket Pipelines, optimising runner utilisation. This feature allows your team to maintain high productivity levels by ensuring that workloads are balanced and resources are not being wasted.
When to use this tool?
Use this tool to set up and scale Bitbucket self-hosted runners on your own infrastructure.
For example, let’s say you have a group called `finance` that needs its own runners, with a minimum of 10 runners available at all times and the ability to scale up to 50.
This is an example of how you would configure the config map file `runners_config.yaml`:
```yaml
constants:
  default_sleep_time_runner_setup: 10
  default_sleep_time_runner_delete: 5
  runner_api_polling_interval: 600
  runner_cool_down_period: 300
groups:
  - name: "Runner group 1"
    workspace: "<workspace_uuid>"
    labels:
      - "finance"
    namespace: "default"
    strategy: "percentageRunnersIdle"
    parameters:
      min: 10
      max: 50
      scale_up_threshold: 0.5
      scale_down_threshold: 0.2
      scale_up_multiplier: 1.5
      scale_down_multiplier: 0.5
    resources:
      requests:
        memory: "2Gi"
        cpu: "1000m"
      limits:
        memory: "2Gi"
        cpu: "1000m"
```
What problems does it solve?
This tool allows you to:
- avoid manually setting up runners in the Bitbucket UI.
- set up multiple runners at once (see the sketch after this list).
- use file-based configuration, i.e., you provide all settings in the config file.
- autoscale runners according to the current build workload.
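As a sketch of the "multiple runners" point above: the `groups` section of `runners_config.yaml` is a list, so additional groups can sit alongside the first one. The second group's name, label, and sizing below are made up purely for illustration.

```yaml
groups:
  - name: "Runner group 1"      # the finance group shown earlier
    # ...settings exactly as in the example above
  - name: "Runner group 2"      # hypothetical second group
    workspace: "<workspace_uuid>"
    labels:
      - "marketing"             # illustrative label
    namespace: "default"
    strategy: "percentageRunnersIdle"
    parameters:
      min: 2
      max: 20
      scale_up_threshold: 0.5
      scale_down_threshold: 0.2
      scale_up_multiplier: 1.5
      scale_down_multiplier: 0.5
    # resources for this group would follow, as in the first example
```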
Documentation
We’ve also compiled a comprehensive guide to help you understand and leverage the Runners Autoscaler. Visit our documentation to learn more.
For developers interested in diving into the code, it is available at https://bitbucket.org/bitbucketpipelines/runners-autoscaler.