MySQL Slow Query Log: Finding Database Bottlenecks in PrestaShop
Why Database Performance Matters in PrestaShop
PrestaShop is a database-heavy application. Every product page, category listing, search result, cart update, and checkout step involves multiple database queries. A typical product page can generate 50 to 200 or more SQL queries depending on the number of modules installed, the complexity of the product (combinations, features, attachments), and the theme. When any of these queries run slowly, the entire page slows down, and the effect compounds under load.
The challenge is identifying which queries are actually slow. With hundreds of queries per page load, you cannot simply guess. You need data. The MySQL slow query log is the most direct and reliable tool for collecting this data. It records every query that exceeds a time threshold you define, giving you a clear picture of where your database spends the most time.
This guide covers how to enable and configure the slow query log, how to analyze the results, how to interpret query execution plans, and how to apply the most common optimizations for PrestaShop databases.
Enabling the Slow Query Log
The slow query log is a MySQL feature that writes queries exceeding a specified execution time to a log file. It is disabled by default on most installations because it adds a small amount of I/O overhead, but the performance cost is negligible compared to the diagnostic value it provides.
Configuration via my.cnf
To enable the slow query log permanently, add the following lines to your MySQL configuration file. On most Linux systems, this file is located at /etc/mysql/my.cnf, /etc/my.cnf, or in a directory like /etc/mysql/conf.d/:
slow_query_log = 1 enables the feature.
slow_query_log_file = /var/log/mysql/slow-query.log specifies where the log is written. Make sure the MySQL process has write permissions to this directory.
long_query_time = 1 sets the threshold in seconds. Any query that takes longer than this value is logged. Start with 1 second to catch the most egregious offenders, then lower it to 0.5 or even 0.1 seconds as you optimize the worst queries and want to find more subtle bottlenecks.
log_queries_not_using_indexes = 1 logs queries that do not use any index, regardless of how long they take. This is extremely useful for PrestaShop because many performance problems are caused by full table scans on large tables. However, this can generate a lot of log entries on a busy store, so you may want to enable it temporarily during analysis and disable it afterward.
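Put together, the resulting section of the configuration file looks like this (the log path matches the example above; adjust it to your distribution):

```ini
[mysqld]
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow-query.log
long_query_time               = 1
log_queries_not_using_indexes = 1
```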
After editing the configuration file, restart MySQL for the changes to take effect.
Enabling at Runtime
You can also enable the slow query log without restarting MySQL by running SQL commands. Connect to MySQL as root and execute:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SET GLOBAL log_queries_not_using_indexes = 1;
Runtime changes take effect immediately but do not persist across MySQL restarts. One caveat: a global change to long_query_time applies only to connections opened after the change; sessions that were already connected keep their previous session value. This approach is useful for temporary analysis sessions where you want to collect data for a specific period and then disable logging.
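To confirm what is currently in effect, you can inspect the variables directly (these are standard MySQL system variables):

```sql
SHOW GLOBAL VARIABLES LIKE 'slow_query_log%';  -- on/off state and log file path
SHOW GLOBAL VARIABLES LIKE 'long_query_time';  -- current threshold in seconds
```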
Choosing the Right Threshold
The long_query_time value determines what gets logged. Setting it too high means you miss moderately slow queries that collectively impact performance. Setting it too low floods the log with entries that are not individually problematic.
For an initial analysis, start at 1 second. This catches queries that are clearly too slow. After optimizing those, lower the threshold to 0.5 seconds, then 0.2 seconds. On a well-optimized PrestaShop database, the goal is for no query to take longer than 0.1 seconds, but reaching that level requires significant optimization work.
Keep in mind that query execution time varies with server load. A query that takes 0.3 seconds under normal load might take 2 seconds during a traffic spike because of CPU contention, disk I/O bottlenecks, or lock contention. The slow query log captures actual execution times, so analyzing logs from peak traffic periods gives you the most realistic picture.
Analyzing the Slow Query Log
The raw slow query log is a text file with entries that look like this:
# Time: 2024-03-15T14:22:33.456789Z
# User@Host: prestashop[prestashop] @ localhost []
# Query_time: 3.456123  Lock_time: 0.000234  Rows_sent: 1  Rows_examined: 847293
SET timestamp=1710511353;
SELECT * FROM ps_product WHERE active = 1 AND id_product NOT IN (SELECT id_product FROM ps_category_product WHERE id_category = 2);
The key fields are Query_time (how long the query took), Lock_time (how long it waited for a lock), Rows_sent (how many rows were returned), and Rows_examined (how many rows MySQL had to look at to find the result). A query that examines 847,293 rows to return 1 row is a clear sign of a missing index or inefficient query structure.
Using mysqldumpslow
Reading the raw log file is impractical for busy stores that generate thousands of slow query entries. The mysqldumpslow tool, included with MySQL, aggregates and summarizes slow query log entries. It groups identical queries together (abstracting away specific values) and sorts them by various criteria.
To find the 10 slowest queries by average time: mysqldumpslow -s at -t 10 /var/log/mysql/slow-query.log
To find the queries with the most total execution time: mysqldumpslow -s t -t 10 /var/log/mysql/slow-query.log
To find the queries that examined the most rows: mysqldumpslow -s r -t 10 /var/log/mysql/slow-query.log
The -s flag specifies the sort order: at for average time, t for total time, c for count (how many times the query appeared), r for rows examined. The -t flag limits the output to the top N queries.
The most useful sort for initial analysis is by total time (-s t), which shows you which queries consume the most database time overall. A query that takes 0.5 seconds but runs 1000 times per hour consumes more total time than a query that takes 5 seconds but runs once per hour.
Using pt-query-digest
For more detailed analysis, Percona Toolkit's pt-query-digest is the industry-standard tool. It provides far richer statistics than mysqldumpslow, including percentile distributions of query times, variance analysis, and table-level statistics.
Basic usage: pt-query-digest /var/log/mysql/slow-query.log
The output starts with a profile section that ranks queries by total time, similar to mysqldumpslow but with more detail. Each query then gets a detailed section showing the minimum, maximum, mean, median, and 95th percentile execution times, plus the distribution of rows examined and rows sent.
The 95th percentile is particularly important for PrestaShop performance. It tells you the execution time that 95% of executions fall below. If the average is 0.3 seconds but the 95th percentile is 2.5 seconds, you have a consistency problem: most of the time the query is acceptable, but 5% of users experience a much slower response.
You can install Percona Toolkit on Debian and Ubuntu with apt install percona-toolkit or download it from the Percona website. It is worth installing on any server where you run PrestaShop.
Common Slow Queries in PrestaShop
Certain query patterns appear repeatedly in PrestaShop slow query logs. Knowing these patterns helps you diagnose issues faster.
Full Table Scans on ps_product
Queries against the ps_product table without proper index usage are among the most common slow queries. As your catalog grows beyond a few thousand products, any query that scans the entire product table becomes problematic. Look for queries with WHERE clauses on columns that are not indexed, or queries that join ps_product with ps_product_lang and ps_product_shop without using the primary keys efficiently.
Category Product Listings with Many Filters
When customers use layered navigation (faceted search) to filter products by attributes, features, or price ranges, PrestaShop generates complex queries that join multiple tables. The ps_layered_* tables used by the faceted search module can become performance bottlenecks if indexes are missing or if the indexing process has not run recently.
Search Queries
PrestaShop's built-in search uses the ps_search_word and ps_search_index tables. On stores with large catalogs and many search terms, these tables grow large and queries against them slow down. The search query typically involves a full-text operation or multiple LIKE conditions, both of which are inherently slower than index lookups.
Cart and Order Queries
Queries that aggregate cart or order data can be slow on stores with a long history. If your ps_cart table has millions of rows (which is common because PrestaShop creates a new cart for almost every visitor), queries that scan this table become slow. The same applies to ps_orders and ps_order_detail on high-volume stores.
Statistics and Reporting Queries
Back office statistics modules often run aggregate queries (SUM, COUNT, GROUP BY) across large tables like ps_orders, ps_connections, and ps_page_viewed. These queries can be extremely slow because they scan large datasets. On stores that have been running for years, these tables may contain millions of rows, and statistics queries that worked fine on a small dataset now take minutes.
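As an illustration, a typical back office revenue aggregate might look like the following (the column names are from the standard ps_orders schema, but the exact queries your statistics modules run will differ):

```sql
-- Without an index on date_add this scans every order row; with one,
-- MySQL can range-scan only the orders in the requested period.
SELECT COUNT(*) AS orders, SUM(total_paid) AS revenue
FROM ps_orders
WHERE date_add >= '2024-01-01' AND date_add < '2024-04-01';
```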
Module-Generated Queries
Third-party modules frequently generate inefficient queries because module developers often test against small datasets. A module that works perfectly with 100 products may generate catastrophically slow queries with 10,000 products. The slow query log helps you identify which modules are responsible because the query text often includes table names or patterns that point to specific modules.
Using EXPLAIN to Analyze Queries
Once you have identified slow queries from the log, the next step is understanding why they are slow. The EXPLAIN statement shows you how MySQL plans to execute a query, including which indexes it uses, how many rows it expects to examine, and what join strategies it employs.
Reading EXPLAIN Output
Run EXPLAIN followed by the slow query. The output shows one row per table in the query, with these important columns:
type: How MySQL accesses the table. Values from best to worst: system/const (single row, essentially free), eq_ref (one row per join, using unique index), ref (multiple rows, using non-unique index), range (index range scan), index (full index scan), ALL (full table scan). If you see ALL on a table with more than a few thousand rows, that is almost certainly your bottleneck.
key: Which index MySQL actually chose for this table. If this is NULL, no index is being used, and MySQL is scanning the entire table.
rows: The estimated number of rows MySQL needs to examine. This is an estimate, not exact, but it gives you a sense of scale. If the estimated rows value is close to the total number of rows in the table, you have a full table scan.
Extra: Additional information about the execution plan. Watch for Using filesort (MySQL must sort results without an index, which is slow for large datasets), Using temporary (MySQL creates a temporary table, often for GROUP BY or DISTINCT operations), and Using where (MySQL is filtering rows after reading them, which means the index does not fully cover the WHERE clause).
EXPLAIN Example
Consider a slow query: SELECT p.id_product, pl.name FROM ps_product p LEFT JOIN ps_product_lang pl ON p.id_product = pl.id_product WHERE pl.id_lang = 1 AND p.active = 1 AND p.price > 100 ORDER BY p.date_add DESC LIMIT 20
Running EXPLAIN on this query might show that the ps_product table is accessed with type ALL (full table scan), no key is used, and the Extra column shows Using where; Using filesort. This tells you that MySQL is reading every row in the product table, filtering by active status and price, and then sorting the results by date. On a table with 50,000 products, that means reading all 50,000 rows and sorting the matches just to return 20.
The fix would be creating a composite index on (active, price, date_add) or restructuring the query to take better advantage of existing indexes.
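A sketch of that first option, using the columns from the example query (test it on a staging copy first, since the optimizer's choice depends on your data distribution):

```sql
-- Composite index: equality filter (active) first, then the range filter
-- (price), then the sort column (date_add). Because price is a range
-- condition, MySQL may still filesort on date_add; check with EXPLAIN.
ALTER TABLE ps_product
  ADD INDEX idx_active_price_date (active, price, date_add);
```

Re-run EXPLAIN on the query afterward: the type column should change from ALL to range, and the key column should name the new index.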
Index Optimization for PrestaShop
Adding the right indexes is the most effective way to speed up slow queries. An index allows MySQL to find rows without scanning the entire table, similar to how a book's index lets you find a topic without reading every page.
When to Add an Index
Add an index when EXPLAIN shows a full table scan (type ALL) on a table with many rows, when a query runs frequently and consistently appears in the slow query log, and when the query's WHERE clause, JOIN condition, or ORDER BY clause references columns that are not currently indexed.
Do not add indexes blindly. Every index speeds up reads but slows down writes (INSERT, UPDATE, DELETE) because MySQL must update the index on every data modification. PrestaShop performs many writes (cart updates, order creation, statistics tracking), so excessive indexing creates its own performance problems.
Composite Indexes
For PrestaShop queries, composite (multi-column) indexes are often more effective than single-column indexes. A composite index on (id_shop, id_lang, active) lets MySQL efficiently handle queries that filter by all three columns. The order of columns in the index matters: MySQL uses the index from left to right, so the most selective column (the one that filters out the most rows) should generally come first.
PrestaShop's multishop and multilanguage architecture means many queries include id_shop and id_lang conditions. These columns appear in virtually every query against product, category, and CMS tables. If you are adding custom indexes, including these columns is often necessary for the index to be useful.
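For example, a hypothetical custom index for queries that filter products by shop, active status, and creation date could look like this (the table and columns exist in a standard installation, but whether this particular index helps depends on the queries your store actually runs):

```sql
-- Equality columns first (id_shop, active), then the sort/range column.
ALTER TABLE ps_product_shop
  ADD INDEX idx_shop_active_date (id_shop, active, date_add);
```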
Covering Indexes
A covering index contains all the columns that a query needs, so MySQL can satisfy the entire query from the index without reading the actual table data. This is shown in EXPLAIN as Using index in the Extra column. Covering indexes provide the best possible performance because reading from an index is faster than reading from the table itself (indexes are smaller and more likely to fit in memory).
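As a sketch, suppose a module repeatedly fetches product references for active products; an index containing both columns can answer the query on its own (the index name here is made up, the columns are standard):

```sql
ALTER TABLE ps_product
  ADD INDEX idx_active_reference (active, reference);

-- EXPLAIN should now show "Using index" in the Extra column: the WHERE
-- column and the selected columns all live inside the index. InnoDB
-- secondary indexes implicitly include the primary key, so id_product
-- is covered as well.
EXPLAIN SELECT id_product, reference FROM ps_product WHERE active = 1;
```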
Common Index Additions for PrestaShop
Several indexes that are not present in a default PrestaShop installation can significantly improve performance on large stores. These include indexes on the date_add column in ps_cart and ps_orders for queries that filter or sort by date, composite indexes on ps_product_attribute for combination-heavy queries, and indexes on custom columns added by modules that run frequent queries against them.
Before adding any index, verify the improvement by running the slow query with and without the index. Use EXPLAIN to confirm that MySQL actually uses the new index. An unused index wastes disk space and slows down writes without providing any benefit.
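On MySQL 5.7 and later, the sys schema offers a quick way to spot indexes that are never read (the database name prestashop and the index name idx_example below are placeholders for your own):

```sql
-- Indexes with no reads since the server started:
SELECT * FROM sys.schema_unused_indexes
WHERE object_schema = 'prestashop';

-- Remove an index that EXPLAIN shows is never chosen:
ALTER TABLE ps_product DROP INDEX idx_example;
```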
Connection Management and Query Optimization
Connection Pooling
PrestaShop uses a single database connection per request by default. Each PHP process opens a connection to MySQL, executes its queries, and closes the connection when the request completes. On busy stores with many concurrent visitors, this creates a high rate of connection creation and teardown, which has overhead.
MySQL's max_connections setting limits how many simultaneous connections are allowed. If your store runs out of connections, visitors see "Too many connections" errors. The default value is often 151, which may be insufficient for high-traffic stores running many PHP-FPM workers.
To determine the right value, check the Max_used_connections status variable, which tells you the peak number of simultaneous connections since MySQL started. Set max_connections to at least 20% above this peak to provide headroom during traffic spikes.
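The check and the adjustment look like this (the peak of 180 is an example figure):

```sql
SHOW GLOBAL STATUS LIKE 'Max_used_connections';  -- suppose this reports 180

-- 180 * 1.2 = 216; round up for headroom. This is a runtime change only:
-- persist the value in my.cnf or it is lost at the next restart.
SET GLOBAL max_connections = 220;
```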
Query Optimization Techniques
Beyond indexing, several query-level optimizations can improve PrestaShop database performance:
Avoid SELECT *: Queries that select all columns transfer more data than necessary between MySQL and PHP. PrestaShop core sometimes uses SELECT * for convenience, but custom queries should specify only the columns needed.
Limit subqueries: MySQL's optimizer handles subqueries less efficiently than JOINs in many cases. If you see slow queries with IN (SELECT ...) patterns, rewriting them as JOINs often improves performance. This applies particularly to queries generated by modules.
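The NOT IN query from the log sample earlier illustrates the pattern. A common rewrite is an anti-join, which also sidesteps the NULL semantics of NOT IN (verify equivalence and the new plan with EXPLAIN before deploying):

```sql
-- Products not assigned to category 2, written as a LEFT JOIN anti-join
-- instead of NOT IN (SELECT ...):
SELECT p.id_product
FROM ps_product p
LEFT JOIN ps_category_product cp
  ON cp.id_product = p.id_product AND cp.id_category = 2
WHERE p.active = 1
  AND cp.id_product IS NULL;
```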
Use LIMIT wisely: For paginated listings, PrestaShop typically uses LIMIT offset, count. For large offsets (like page 500 of a product listing), this becomes slow because MySQL must read and discard all rows up to the offset. A more efficient approach is keyset pagination, where you filter by the last seen ID instead of using an offset.
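A sketch of both approaches, assuming a listing ordered by id_product (the literals 9980 and 12345 are placeholders):

```sql
-- Offset pagination: MySQL reads and throws away 9,980 rows first.
SELECT id_product, date_add FROM ps_product
ORDER BY id_product DESC LIMIT 9980, 20;

-- Keyset pagination: seek directly past the last id seen on the
-- previous page, using the primary key.
SELECT id_product, date_add FROM ps_product
WHERE id_product < 12345
ORDER BY id_product DESC LIMIT 20;
```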
Batch operations: Modules that process products in loops often execute one query per product. Rewriting these as batch queries (using IN clauses or CASE statements for updates) dramatically reduces the number of round trips to the database.
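For instance, three single-row price updates can collapse into one statement (the ids and prices are illustrative; the CASE pattern is the point):

```sql
UPDATE ps_product
SET price = CASE id_product
    WHEN 101 THEN 19.99
    WHEN 102 THEN 24.99
    WHEN 103 THEN  9.99
END
WHERE id_product IN (101, 102, 103);
```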
Monitoring and Ongoing Optimization
Database optimization is not a one-time task. As your catalog grows, traffic patterns change, and you install new modules, new slow queries emerge. Establish a routine for monitoring database performance.
Enable the slow query log permanently with a reasonable threshold (0.5 to 1 second). Review the log weekly or monthly using pt-query-digest. Pay attention to new queries that appear in the log and to existing queries whose execution time increases over time.
Monitor MySQL's key performance metrics: the buffer pool hit rate (should be above 99%), the number of slow queries per hour, the average query time, and the connection usage. These metrics give you early warning of performance degradation before it impacts users.
When you add or update modules, check whether they introduce new slow queries. Run the module's functionality while monitoring the slow query log to catch problems before they affect production traffic. A module that generates efficient queries on a test store with 50 products may create severe bottlenecks on a production store with 50,000 products. Testing with production-scale data is the only reliable way to verify module performance.
Summary
The MySQL slow query log is your most valuable tool for finding and fixing database bottlenecks in PrestaShop. Enable it, set an appropriate threshold, and use analysis tools like mysqldumpslow or pt-query-digest to identify the worst offenders. Use EXPLAIN to understand why specific queries are slow, and apply targeted indexes to eliminate full table scans. Monitor your database performance continuously, because optimization is an ongoing process as your store grows. The combination of slow query log analysis, EXPLAIN-driven optimization, and proper indexing can transform a sluggish PrestaShop store into one that handles large catalogs and high traffic with responsive page loads.