-
MySQL 9.7 is out and the community wins
May 2026 · 5 min read · Alkin Tezuysal, Director of Services at Altinity Inc.
Cross-posted from the Oracle blog: https://blogs.oracle.com/mysql/mysql-9-7-is-out-and-the-community-wins
MySQL 9.7 came out on April 21 and I’ve been going through the release notes so you don’t have to. The short version: Oracle has made several previously Enterprise-only features available in the Community Edition; the Hypergraph Optimizer is now free for everyone; and if you’re still on MySQL 8.0, it has reached End-of-Life. Like right now. We’ll get to that.
Let’s go through what matters most.
First: MySQL 8.0 has reached End-of-Life
MySQL 8.0.46 shipped alongside 9.7, and it is the last 8.0 release. As of April 2026, 8.0 is officially End-of-Life. No more security patches. No more bug fixes. Oracle’s release notes now encourage users to upgrade to MySQL 8.4 LTS or 9.7.
If you’re on 8.0 in production, you’ve got two paths. MySQL 8.4 LTS is the safest, most conservative upgrade with a well-trodden migration path from 8.0. MySQL 9.7 is if you want all the new stuff, including everything in this post. Either way, the clock has run out on 8.0. It is strongly recommended to plan your upgrade soon.
Some Enterprise features available in the Community Edition
This is the headline. Oracle has moved five components from Enterprise Edition into Community Edition with this release. For self-hosted MySQL users, this is genuinely good news.
Four of them are replication components that now ship in Community Edition.
Replication Applier Metrics gives you real visibility into how your replica is processing events. Lag monitoring, throughput, the works. This was always the kind of thing you had to implement yourself or buy Enterprise Edition to get.
Group Replication Flow Control Statistics now provides visibility into why your GR cluster is throttling. If you’ve ever stared at a slow cluster and had no idea what was happening under the hood, this one’s for you.
Group Replication Resource Manager lets you control how resources get allocated so replication stops starving your app workloads.
Group Replication Primary Election gives better observability into failover behavior during primary elections, which is exactly when you most need to know what’s going on.
And then there’s the Telemetry component, which is big if you’re running MySQL in a cloud-native setup. Metrics and traces can now flow to Prometheus, OpenTelemetry, and whatever else your observability stack includes. This was Enterprise-only until today.
The Hypergraph Optimizer is in Community Edition now
This is the one that stands out as particularly impactful.
Quick background: MySQL’s traditional query optimizer uses a left-deep tree approach to figure out join order. Works fine for simple queries, but with complex multi-table joins it can miss a lot of better execution plans. The Hypergraph Optimizer takes a completely different approach. It models the whole query as a hypergraph and uses dynamic programming to search a much bigger space of possible plans.
If you’ve got complex reporting queries or anything with a lot of joins, it’s worth turning on and seeing what happens.
To try it:
SET optimizer_switch='hypergraph_optimizer=on';
You can set it at session scope (great for testing), globally, persistently across restarts, or hint it on a per-query basis. Start with session scope on your actual slow queries before you go global. The optimizer is solid, but you don’t want surprises in production.
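For reference, here is what each of those scopes looks like in SQL. The SET SESSION/SET GLOBAL/SET PERSIST statements and the SET_VAR optimizer hint are standard MySQL syntax; the table names in the last query are placeholders, not anything from the release.

```sql
-- Session scope: affects only this connection (ideal for testing)
SET SESSION optimizer_switch = 'hypergraph_optimizer=on';

-- Global scope: affects new connections, but is lost on restart
SET GLOBAL optimizer_switch = 'hypergraph_optimizer=on';

-- Persisted: written to mysqld-auto.cnf, survives restarts
SET PERSIST optimizer_switch = 'hypergraph_optimizer=on';

-- Per-query, via an optimizer hint (orders/customers are hypothetical tables)
SELECT /*+ SET_VAR(optimizer_switch = 'hypergraph_optimizer=on') */ *
FROM orders JOIN customers USING (customer_id);
```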
JSON Duality Views are now fully supported in Community
Previously, Community Edition let you define JSON Duality Views, but INSERTs, UPDATEs, and DELETEs through them were an Enterprise-only capability. That restriction is gone: full DML through duality views is now in Community.
They also added auto-increment support for duality view inserts, which means you can stop manually wiring up primary keys:
INSERT INTO orders_view
VALUES ('{"customer_id": 42, "product_id": 7, "quantity": 3}');
The order_id gets generated automatically. Small change, but it removes a real friction point if you’ve been experimenting with duality views.
A few other things worth knowing
Password hashing got stronger. caching_sha2_password now supports a PBKDF2 with SHA-512 storage format. Your existing clients don't need to change anything, since this is server-side only, but it makes stored hashes significantly harder to brute-force. Worth noting if you're in a compliance-heavy environment like PCI-DSS or HIPAA.
Rolling upgrades just got easier. There’s a new variable called replica_allow_higher_version_source that lets a lower-version replica connect to a higher-version primary. Practically, this means you can upgrade your primary first, verify everything looks good, and then roll through your replicas at your own pace instead of taking the whole fleet down at once.
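In practice that rolling upgrade looks something like this. This is a sketch: the variable name comes from the release notes above, and I'm assuming it is a simple ON/OFF switch you flip on the replicas before upgrading the primary.

```sql
-- On each replica, before the primary is upgraded past the replica's version:
SET GLOBAL replica_allow_higher_version_source = ON;

-- Upgrade the primary, verify it, then upgrade replicas one by one.
-- Once every server is on the same version again, turn the switch back off:
SET GLOBAL replica_allow_higher_version_source = OFF;
```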
Container users, this one’s for you. MySQL now correctly reads cpuset cgroup limits to figure out how many CPUs are actually available to it. If you’d constrained MySQL to 4 CPUs in Kubernetes, it might have been sizing its thread pools against all 32 host CPUs anyway. That’s fixed.
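To see why this matters, a cpuset limit is a string like 0-3,8 rather than a plain count. Here is a small shell sketch that counts the CPUs such a string grants, using a hard-coded sample value so it runs anywhere; on a real cgroup v2 host you would read the live value from /sys/fs/cgroup/cpuset.cpus.effective.

```shell
# Sample cpuset string: CPUs 0,1,2,3 and 8 (hypothetical value; on a real
# host: cpuset=$(cat /sys/fs/cgroup/cpuset.cpus.effective))
cpuset="0-3,8"

count=0
for range in $(echo "$cpuset" | tr ',' ' '); do
  case "$range" in
    *-*)
      # A range like "0-3" contributes hi - lo + 1 CPUs
      lo=${range%-*}
      hi=${range#*-}
      count=$((count + hi - lo + 1))
      ;;
    *)
      # A single CPU id like "8" contributes one
      count=$((count + 1))
      ;;
  esac
done

echo "$count"   # -> 5 for the sample value
```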
OpenSSL got bumped to 3.5.5. Not exciting, but good hygiene.
What about Vector search?
A lot of people are asking about this, and the short answer is: not yet generally available. Oracle is clearly building toward native vector search across the 9.x innovation releases, and you can see pieces of the foundation being put in place, but 9.7 isn't the release where it lands as something you can use. It's on the roadmap, it's expected in a future release, and the team is looking for feedback. As some of you know, I'm especially interested in vector features as a maintainer of the MyVector project. We've had a couple of calls with the engineering team to collaborate on this subject. This might be an area for community contributions. Please stay tuned.
So, should you upgrade?
Still on MySQL 8.0: Upgrade. It's EOL. Pick 8.4 LTS or 9.7.
On 8.4 LTS and happy where you are: Nothing urgent. Track the 9.x line until the next LTS lands.
Running Group Replication: Try the new components.
Complex JOIN-heavy queries: Benchmark the Hypergraph Optimizer.
Running MySQL in Kubernetes: The cgroup CPU fix alone might be worth it.
Excited about vector search: Not yet. Watch this space.
MySQL 9.7 is available at dev.mysql.com/downloads. If you end up benchmarking the Hypergraph Optimizer or trying the new replication components, share your results. The community learns from real-world numbers a lot more than from release notes. Drop them in the MySQL Community Forums or tag #MySQL97.
-
Why Your Application Should Not Use One MySQL User for Everything
Many applications start with a simple database setup: create one MySQL user, give it access to the application schema, put the credentials in the app config, and move on. That may work at first, but it is not a good long-term security model. A better approach is to use separate MySQL users for separate application […]
-
Curious case of PXC node that refused to start due to SSL
In this blog, I am going to share a real-world debugging case study where a routine Percona XtraDB Cluster node restart led to an unexpected failure. I will walk through what we observed, what we checked, and how we ultimately identified the root cause.
Let’s see how the maintenance goes. It was supposed to be a simple restart. The kind you’ve done a hundred times. You SSH in, run the maintenance, bring the node back up, and go grab a coffee. Except this time, the coffee went cold on the desk… because MySQL refused to start.
The Problem
The error log of Percona XtraDB Cluster (8.0) had the following information:

2025-11-05T05:26:10.982984Z 0 [ERROR] [MY-000059] [Server] SSL error: Unable to get certificate from '/var/lib/mysql/server-cert.pem'.
2025-11-05T05:26:10.983030Z 0 [Warning] [MY-013595] [Server] Failed to initialize TLS for channel: mysql_main. See below for the description of exact issue.
2025-11-05T05:26:10.983045Z 0 [Warning] [MY-010069] [Server] Failed to set up SSL because of the following SSL library error: Unable to get certificate
2025-11-05T05:26:10.983052Z 0 [Note] [MY-000000] [WSREP] New joining cluster node configured to use specified SSL artifacts
2025-11-05T05:26:10.983083Z 0 [Note] [MY-000000] [Galera] Loading provider /usr/lib64/galera4/libgalera_smm.so initial position: 07c67757-0d18-11ef-b5a9-ee5d87b39aa8:4147053897
2025-11-05T05:26:10.983098Z 0 [Note] [MY-000000] [Galera] wsrep_load(): loading provider library '/usr/lib64/galera4/libgalera_smm.so'
2025-11-05T05:26:10.983742Z 0 [Note] [MY-000000] [Galera] wsrep_load(): Galera 4.22(f6c0465) by Codership Oy <info@codership.com> (modified by Percona <https://percona.com/>) loaded successfully.
2025-11-05T05:26:10.983771Z 0 [Note] [MY-000000] [Galera] Resolved symbol 'wsrep_node_isolation_mode_set_v1'
2025-11-05T05:26:10.983784Z 0 [Note] [MY-000000] [Galera] Resolved symbol 'wsrep_certify_v1'
2025-11-05T05:26:10.983807Z 0 [Note] [MY-000000] [Galera] CRC-32C: using 64-bit x86 acceleration.
2025-11-05T05:26:10.983995Z 0 [Note] [MY-000000] [Galera] not using SSL compression
2025-11-05T05:26:10.984341Z 0 [ERROR] [MY-000000] [Galera] Bad value '/var/lib/mysql/server-cert.pem' for SSL parameter 'socket.ssl_cert': 336245135: 'error:140AB18F:SSL routines:SSL_CTX_use_certificate:ee key too small'
at /mnt/jenkins/workspace/pxc80-autobuild-RELEASE/test/rpmbuild/BUILD/Percona-XtraDB-Cluster-8.0.42/percona-xtradb-cluster-galera/galerautils/src/gu_asio.cpp:ssl_prepare_context():471
2025-11-05T05:26:10.984401Z 0 [ERROR] [MY-000000] [Galera] Failed to create a new provider '/usr/lib64/galera4/libgalera_smm.so' with options 'gcache.size=1G;gcache.recover=yes;socket.ssl=yes;socket.ssl_ca=/data00/mysqldata/ca.pem;socket.ssl_cert=/data00/mysqldata/server-cert.pem;socket.ssl_key=/data00/mysqldata/server-key.pem;socket.ssl_key=/var/lib/mysql/server-key.pem;socket.ssl_ca=/var/lib/mysql/ca.pem;socket.ssl_cert=/var/lib/mysql/server-cert.pem': Failed to initialize wsrep provider
2025-11-05T05:26:10.984434Z 0 [ERROR] [MY-000000] [WSREP] Failed to load provider
2025-11-05T05:26:10.984448Z 0 [ERROR] [MY-010119] [Server] Aborting
2025-11-05T05:26:10.984602Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.0.42-33.1) Percona XtraDB Cluster (GPL), Release rel33, Revision 6673f8e, WSREP version 26.1.4.3.
2025-11-05T05:26:10.985473Z 0 [ERROR] [MY-010065] [Server] Failed to shutdown components infrastructure.
MySQL was down, and the maintenance clock was running. The certificate file sitting at /var/lib/mysql/server-cert.pem was the same file that had been working perfectly fine before the restart!!
From past history, we knew the following commands had been executed successfully on this same cluster node:

SET GLOBAL ssl_ca = '/var/lib/mysql/ca.pem';
SET GLOBAL ssl_cert = '/var/lib/mysql/server-cert.pem';
SET GLOBAL ssl_key = '/var/lib/mysql/server-key.pem';
ALTER INSTANCE RELOAD TLS;

Clients connected over TLS. Galera nodes communicated securely. There were zero complaints in the error log.
In other words, the runtime SSL reload had inherited the process environment from when MySQL originally booted. Everything was smooth, but after a restart? MySQL complained and declined to start. So what had changed?
Checking Usual Suspects
File permissions
We checked the PEM files.
Ownership: mysql:mysql.
Permissions: 644 for the cert, 600 for the key.
We compared them against the other Galera nodes, and they were identical. This didn’t look like a permissions problem.
Is SELinux to blame here?
SELinux has ruined enough DBA time that it earns one of the top spots on such checklists, but it was permissive:

$ getenforce
Permissive

That means it was logging any security issues, but not blocking. And there were no AVC denials related to MySQL or the PEM files in /var/log/audit/audit.log or dmesg!
File corruption
Did the files get corrupted or replaced during or before the MySQL restart?

$ openssl x509 -in /var/lib/mysql/server-cert.pem -noout -text
# Output looked perfectly valid when compared to the output from other nodes

$ openssl rsa -in /var/lib/mysql/server-key.pem -check
RSA key ok

The files were fine. They parsed cleanly. OpenSSL could read them. So why couldn't MySQL?
More log review
We scanned /var/log/messages and journalctl for anything unusual around the time of the restart. No disk errors. No OOM kills. No kernel panics. Nothing that screamed “I am the Dhurandhar that destroyed your node.” At this point, most of the usual suspects were guilt-free, staring at us and asking, “Who did it?”
The Clue
It is good to communicate with stakeholders, and we did. We asked the client, “Was there any recent change on your side?” and they uttered the golden words: “Last week the crypto-policy was updated on all of the DB servers to comply with PCI.”
PCI > Crypto-policy. Let’s go and check it!

$ update-crypto-policies --show
FUTURE

The system was running RHEL’s FUTURE cryptographic policy.
For those unfamiliar (including me at the time), Red Hat Enterprise Linux (and its derivatives, such as Rocky, Alma, and Oracle Linux) ships with a system-wide cryptographic policy framework. It’s a centralized way to enforce minimum standards for TLS versions, cipher suites, key lengths, and signature algorithms across every application on the system that uses the core crypto libraries, OpenSSL among them, and yes, anything that links against those libraries… like MySQL.
Here’s a table that shows information about the crypto-policy levels:
Policy  | RSA Minimum | TLS Minimum | SHA-1 Signatures | Use Case
LEGACY  | 1024-bit    | TLS 1.0     | Allowed          | Old systems compatibility
DEFAULT | 2048-bit    | TLS 1.2     | Allowed          | Standard operations
FUTURE  | 3072-bit    | TLS 1.2     | Blocked          | Forward-looking hardening
FIPS    | 2048-bit    | TLS 1.2     | Blocked          | FIPS 140 compliance
So FUTURE demands at least a 3072-bit RSA key; anything smaller is blocked. What do we have?
$ openssl rsa -in server-key.pem -text -noout | head -1
RSA Private Key: (2048 bit, 2 primes)

2048 bits! C’mon! And now I recall the error log again… the hint was there all along:

error:140AB18F:SSL routines:SSL_CTX_use_certificate:ee key too small

Now we have our story straight.
On restart, our PXC cluster node started a new process linked against OpenSSL, which now enforced the FUTURE policy. OpenSSL looked at the 2048-bit RSA certificate and said: “Nope. Too small.”
The Fix
The quick fix here is to drop the policy back to DEFAULT:

sudo update-crypto-policies --set DEFAULT

This makes the current certificates acceptable again, and the node joins the cluster readily.
Alternatively, to remain compliant and adhere to the stricter security policy, the fix is to:
Generate new certificates
Deploy the keys/certs to all Galera nodes
Perform a rolling restart
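As a sketch of the certificate regeneration step, here is one way to produce a FUTURE-compliant (3072-bit) self-signed CA and server certificate with openssl. File names and subject fields are illustrative; match them to your actual ssl_* and wsrep_provider_options paths before deploying.

```shell
# 3072-bit CA key and a self-signed CA certificate (10 years)
openssl genrsa -out ca-key.pem 3072
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem \
    -subj "/CN=PXC-CA" -out ca.pem

# 3072-bit server key plus a CSR, then sign the CSR with the CA
openssl genrsa -out server-key.pem 3072
openssl req -new -key server-key.pem -subj "/CN=pxc-node1" -out server-req.pem
openssl x509 -req -in server-req.pem -days 3650 \
    -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem

# Confirm the new key meets the FUTURE minimum
openssl rsa -in server-key.pem -noout -text | head -1
```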
Conclusion
This was a classic case of a problem hiding at the boundary between two domains: database administration and operating system security. The DBA saw valid certificates and a correct MySQL configuration. The sysadmin saw a properly hardened system with a strong crypto policy. Neither was wrong. But the intersection of their two correct configurations produced a failure.
This incident reinforces the importance of cross-domain awareness, where resolving database issues sometimes requires understanding and challenging system-level security decisions.
The post Curious case of PXC node that refused to start due to SSL appeared first on Percona.
-
Building Query Analysis and Insights Dashboard in PMM
Percona Monitoring and Management (PMM) is a great open source database monitoring, observability, and management tool. Query Analytics is one of its most prominent features, one DBAs use actively to trace incidents and identify query performance problems.
We all know and love the Query Analytics (QAN) dashboard… It’s the first place we look when an incident alert fires or when a developer asks, “Why is the app slow?” or “What was going on during the midnight production outage?”
But sometimes, the standard dashboards just don’t tell the whole story or maybe are not clear enough. QAN is great, but shouldn’t we have more? If you have PMM running, you already have a Ferrari engine under the hood: ClickHouse. Most of us just drive it in first gear using the default UI.
In this post, we are going to take the training wheels off. We will bypass the standard QAN interface and talk directly to the ClickHouse backend to build highly specialised dashboards. We aren’t just looking for “slow” queries anymore; we are hunting for inefficiency, volatility, and the “silent killers” that standard monitoring often misses.
This is a hands-on post, so grab your coffee and let’s turn that PMM instance into a deep-dive forensic tool.
Create a New Dashboard in PMM
Connect to PMM > Dashboards > Create New Dashboard
Save it with the name “Slow Query Analysis” and the description “Slow Query Analysis from PMM’s QAN database (ClickHouse)”
Click on add visualisation & select datasource “ClickHouse”
Choose SQL Builder
Paste the following query to get the top 10 slowest queries from the database
SELECT fingerprint
FROM pmm.metrics
WHERE service_type = 'mysql'
AND $__timeFilter(period_start)
GROUP BY fingerprint
ORDER BY sum(m_query_time_sum) DESC
LIMIT 10
Choose “Table View” on the top to view the list
When you click “Run Query” you will see the top 10 slow queries in the chosen time period.
Let’s save the dashboard after updating the Panel Options as follows:

7.1 Change the panel name and description to “Slow Query Analysis”
7.2 Set Legend Placement to “Bottom” and Values to “min”, “max”, “mean”
7.3 Change the axis Scale to “Logarithmic”. A logarithmic scale compresses large ranges of data, making it ideal for visualizing metrics with vastly different magnitudes. This gives a good visualisation of queries with very different execution-time ranges.
7.4 Save the dashboard

Alright, we’re at our first step. This first result set shows the top 10 slow query fingerprints across all MySQL services tracked by PMM for the selected time range. It provides a quick, environment-wide view of the most expensive query patterns. But it does not yet give a clear picture. Let’s refine the dashboard to focus on specific queries and servers and observe their performance over time.

Now, let’s introduce a variable to filter the data.
Click on Settings on the dashboard’s home page:

8.1 Choose the “Variables” tab and click on “Add Variable”
8.2 Add the variable configuration and save the dashboard
Go Back to Dashboard and Edit “Slow Query Analysis” Panel.
Now you should see the Query ID filter on the top.
Change the query to the following
SELECT
period_start AS time,
left(fingerprint, 80) AS query_text,
sum(m_query_time_sum/m_query_time_cnt) AS query_time
FROM
pmm.metrics
WHERE
service_type = 'mysql'
AND $__timeFilter(period_start)
AND fingerprint IN (
SELECT fingerprint
FROM pmm.metrics
WHERE service_type = 'mysql'
AND $__timeFilter(period_start)
AND ($queryid = '' OR queryid = $queryid)
GROUP BY fingerprint
ORDER BY sum(m_query_time_sum) DESC
LIMIT 10
)
GROUP BY
time,
fingerprint
ORDER BY
time,
query_time DESC
Basically, the query fetches the start time, query text, and average query time for the selected period, for the top 10 queries in that time frame.
There is a filter for the “queryid” variable which you may use if you want to filter on a specific queryid.
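One side note on the math: sum(m_query_time_sum/m_query_time_cnt) adds up per-row averages, so every ingest row weighs the same regardless of how many queries it covers. If you prefer a count-weighted average per time bucket, a variant using the same pmm.metrics columns and Grafana macros would be:

```sql
SELECT
    period_start AS time,
    left(fingerprint, 80) AS query_text,
    sum(m_query_time_sum) / sum(m_query_time_cnt) AS query_time
FROM pmm.metrics
WHERE service_type = 'mysql'
  AND $__timeFilter(period_start)
GROUP BY time, fingerprint
ORDER BY time, query_time DESC
```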
Choose “Time Series” as “Query Type”
Adjust the Panel Options:

11.1 Choose “Standard options” > “Unit” as “Time / Seconds (s)” from the drop-down
11.2 Choose “Standard options” > “Display name” as “${__field.labels.query_text}”
11.3 Click on “Save Dashboard”
Your dashboard should be ready
Now, by default this dashboard is plotting top 10 queries. If you have a query fingerprint handy, you may be able to filter the search by that specific query. That said, this is still plotting queries across all the monitored instances. Let’s move on to add the service_name filter.
Adding service_name filter
Add Variable
Create new variable named “service_name”
Use variable type “Query”
Use Data Source as “ClickHouse”
Query:
select distinct service_name from pmm.metrics where service_type = 'mysql';
Unselect all checkboxes in “Selection options”
Save Dashboard
Update Query
SELECT
period_start AS time,
left(fingerprint, 80) AS query_text,
sum(m_query_time_sum/m_query_time_cnt) AS query_time
FROM
pmm.metrics
WHERE
(service_name = '' OR service_name = '$service_name')
AND service_type = 'mysql'
AND $__timeFilter(period_start)
AND fingerprint IN (
SELECT fingerprint
FROM pmm.metrics
WHERE service_type = 'mysql'
AND $__timeFilter(period_start)
AND (service_name = '' OR service_name = '$service_name')
GROUP BY fingerprint
ORDER BY sum(m_query_time_sum) DESC
LIMIT 10
)
GROUP BY
time,
left(fingerprint, 80)
ORDER BY
time,
query_time DESC

I know many of you are naturally curious and enjoy experimenting with PMM and Grafana… So you’ve probably already started thinking about how far this can be taken. Feel free to share your ideas or custom dashboards in the comments.
Sample Dashboards:
The Query Analysis and Insights Dashboard
Okay, for those who are looking to have quick results, I’ve prepared the complete Query Analysis and Insights Dashboard for you to import and use instantly.
By importing the JSON file, you’ll get the full working dashboard with all panels preconfigured, including:
Slow Query Analysis
Latency Distribution Heatmap
Query Volatility (P99 vs Average)
Lock Wait Ratio Over Time (Top Contended Queries)
Temporary Table Usage (Disk & Memory)
Query Efficiency (Rows Examined vs Rows Sent)
Error Rate vs Throughput
Workload Distribution by User
Query Volume by Client Host
Execution Time vs Lock Wait Time
This allows you to instantly explore PMM Query Analytics data, adjust time ranges and filters, and correlate query performance, contention, and workload behavior without recreating the dashboard from scratch.
Dashboard JSON available here:
Grafana: https://grafana.com/grafana/dashboards/24896
GitHub: https://github.com/Percona-Lab/pmm-dashboards/query_analysis_insights.json
Give it a go and let me know if you have suggestions or requests. Also consider sharing if you create something interesting.
Cheers.
The post Building Query Analysis and Insights Dashboard in PMM appeared first on Percona.