Performance Tips

MariaDB

The InnoDB buffer pool serves as a cache for data and indexes. It is a key component for optimizing MariaDB performance. Its size should be as large as possible to keep frequently used data in memory and reduce disk I/O - typically the biggest bottleneck.

By default, the buffer pool size is between 128 MB and 512 MB, depending on which configuration example you use. You can change it with the --innodb-buffer-pool-size command-line parameter in the mariadb: section of your docker-compose.yml. Use M for megabytes and G for gigabytes, without a space between the number and the unit.

If your server has plenty of physical memory, we recommend increasing the size to 1 or 2 GB:

services:
  mariadb:
    command: mysqld --innodb-buffer-pool-size=1G ...
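
After editing docker-compose.yml, the container must be recreated for the new setting to take effect, for example with Docker Compose (assuming the service is named mariadb as above):

docker compose up -d mariadb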

As a rule of thumb, Innodb_buffer_pool_pages_free should never drop below 5% of the total number of pages. To display the number of free pages and other InnoDB-related status information, run the following SQL statement, for example with the mariadb command-line client in a terminal:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer%';
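
To check the 5% rule directly, you can compute the percentage of free pages in a single query. This is a minimal sketch that reads the counters from the information_schema.GLOBAL_STATUS table in MariaDB:

-- percentage of buffer pool pages that are currently free
SELECT
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
   WHERE VARIABLE_NAME = 'Innodb_buffer_pool_pages_free') * 100 /
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
   WHERE VARIABLE_NAME = 'Innodb_buffer_pool_pages_total')
  AS free_pages_percent;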

Advanced users may adjust additional parameters to further improve performance. Tools such as the mysqltuner.pl script can provide helpful recommendations for this.
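
As an example, MySQLTuner can be run from the host against the published database port; the download URL, host, port, and credentials below are assumptions that need to be adapted to your setup:

# download the script and run it against the MariaDB server
wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl --host 127.0.0.1 --port 3306 --user root --pass 'your-root-password'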

!!! info "Windows and macOS"
    If you are using Docker Desktop on Windows or macOS, remember to increase the total memory available to Docker services. Otherwise, they may run out of resources and cannot benefit from a larger cache size. If Starsky and MariaDB are running in a virtual machine, its memory size should be increased as well. Restart for the changes to take effect.

Windows

Solve Windows-Specific Issues ›

Storage

Local Solid-State Drives (SSDs) are best for databases of any kind:

  • database performance benefits greatly from the high throughput that HDDs cannot provide
  • SSDs offer more predictable performance and can handle more concurrent requests
  • because of their seek time, HDDs support only about 5% of the reads per second that SSDs can deliver
  • the cost savings from using slow hard disks are minimal

Switching to SSDs makes a big difference, especially for write operations and when the read cache is not big enough or can't be used.

note
Never store database files on an unreliable device such as a USB flash drive, SD card, or shared network folder. These may also have unexpected file size limitations, which is especially problematic for databases that do not split data into smaller files.

Memory

Indexing large photo and video collections benefits from plenty of memory for caching and for processing large media files. Ideally, the amount of RAM in GB should at least match the number of physical CPU cores. If it does not, reduce the number of workers as explained below.

Server CPU

Last but not least, performance can be limited by your server's CPU. If you have tried everything else, moving your instance to a more powerful device or cloud server may be the only option left.

Be aware that most NAS devices are optimized for minimal power consumption and low production costs. Although their hardware gets faster with each generation, benchmarks show that even 8-year-old standard desktop CPUs like the Intel Core i3-4130 are often many times faster:

[CPU benchmark comparison chart]

Legacy Hardware

It is a known issue that the user interface and backend operations, especially face recognition, can be slow or even crash on older hardware due to a lack of resources. Like most applications, Starsky has certain requirements and our development process does not include testing on unsupported or unusual hardware.

In many cases, performance can be improved through optimizations. Since these can be very time-consuming and costly in practice, users and developers must decide on a case-by-case basis whether the benefit justifies the effort, or whether using more powerful hardware is faster and cheaper overall.

We kindly ask you not to open a problem report on GitHub Issues for poor performance on older hardware until a full cause and feasibility analysis has been performed. GitHub Discussions or any of our other public forums and communities are great places to start a discussion.

That being said, one of the advantages of open-source software is that users can submit pull requests with performance and other enhancements they would like to see implemented. This will result in a much faster solution than waiting for a core team member to remotely analyze your problem and then provide a fix.

Troubleshooting

If your server runs out of memory, the index is frequently locked, or other system resources are running low:

  • Try reducing the number of workers by setting app__maxDegreesOfParallelism to a reasonably small value in docker-compose.yml, depending on the CPU performance and number of cores (see the sketch after this list)
  • Make sure your server has at least 4 GB of swap space so that indexing doesn't cause restarts when memory usage spikes; RAW image conversion and video transcoding are especially demanding (example commands after this list)
  • If you are using SQLite, switch to MariaDB, which is better optimized for high concurrency
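
A minimal sketch of how the worker limit might be set in docker-compose.yml; the service name starsky and the use of an environment variable for app__maxDegreesOfParallelism are assumptions, so adapt them to your actual configuration:

services:
  starsky:
    environment:
      # assumed setting from above: limits the number of parallel workers
      app__maxDegreesOfParallelism: "2"

If your server has no swap yet, a 4 GB swap file can be created on most Linux systems with the standard commands below (path and size are examples); add a matching entry to /etc/fstab to make it permanent:

# create and enable a 4 GB swap file
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile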

Other issues? Our troubleshooting checklists help you quickly diagnose and solve them.

*[SQLite]: self-contained, serverless SQL database