Sharing keys and passphrases with applications is problematic, especially with regard to encrypting data. Too often, applications are developed with “the keys left in the door” or at best “under the mat”: hard-coded, or sitting in a cleartext property file, exposed and vulnerable. …
This is a belated blog post about two recent bug reports:
#85969
#85971
The basic idea came after reading http://mysqlserverteam.com/the-mysql-8-0-1-milestone-release-is-available/
Here is the result of the test:
After each restart of MySQL, new undo log files are created while the old files are kept.
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ du -hs
6.4G

# The count of undo files
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ ls | grep undo | wc -l
539
After another restart:

# New count
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ ls | grep undo | wc -l
616
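While reproducing this, it can help to check which undo-related settings are in effect. Below is a minimal mysqlsh (JavaScript) sketch of my own; the connection URI is an assumption, so adjust it to your sandbox:

// inspect_undo.js -- hypothetical helper, run with: mysqlsh -f inspect_undo.js
// The URI below is an assumption; point it at your sandbox's X protocol port.
shell.connect("root:msandbox@localhost:33060")

// List the InnoDB undo-related variables (tablespace count, truncation, etc.)
var res = session.sql("SHOW VARIABLES LIKE 'innodb_undo%'").execute()
var row = res.fetchOne()
while (row) {
  print(row[0] + " = " + row[1] + "\n")
  row = res.fetchOne()
}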
So how to …
[Read more]

If you look into the key elements of replication, the most basic element is the binary log, or binlog. Over time, we have made efforts to improve the management of this quintessential element of replication. To keep up with our rising standards and the requirements coming from the global MySQL community, we have introduced two new features to help you manage your binary logs more efficiently.…
With the introduction of MySQL InnoDB Cluster we also got the MySQL Shell (mysqlsh) interface. The shell offers scripting in JavaScript (the default), SQL, or Python. This opens up many more options for writing scripts for MySQL; for example, it is now much easier to use multiple server connections in a single script.
A customer recently asked for a way to compare the transaction sets between servers. That is useful when setting up replication or when identifying the server that has the most transactions applied already. So I wrote this little script, which can be executed from the OS shell:
#!/usr/bin/mysqlsh -f
// it is important to connect to the X protocol port,
// usually the traditional port with a "0" appended (e.g. 3306 -> 33060)
//
var serverA="root:root@localhost:40010"
var serverB="root:root@localhost:50010"
shell.connect(serverA)
var gtidA=session.sql("SELECT @@global.gtid_executed").execute().fetchOne()[0]
shell.connect(serverB) …
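The script is cut off above, but a plausible continuation (my sketch, not the author's actual code) fetches the second GTID set the same way and diffs the two sets with MySQL's GTID_SUBTRACT() function:

var gtidB=session.sql("SELECT @@global.gtid_executed").execute().fetchOne()[0]

// GTID_SUBTRACT() returns the GTIDs contained in the first set but not the
// second, so an empty string in both directions means the sets are identical.
var missingOnB=session.sql("SELECT GTID_SUBTRACT('" + gtidA + "','" + gtidB + "')").execute().fetchOne()[0]
var missingOnA=session.sql("SELECT GTID_SUBTRACT('" + gtidB + "','" + gtidA + "')").execute().fetchOne()[0]
print("Missing on B: " + (missingOnB == "" ? "none" : missingOnB) + "\n")
print("Missing on A: " + (missingOnA == "" ? "none" : missingOnA) + "\n")

Running the subtraction in both directions also shows which server is ahead, which answers the customer's original question.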
[Read more]
The MariaDB project is pleased to announce the immediate availability of MariaDB 10.1.24 and MariaDB Connector/C 2.3.3. See the release notes and changelogs for details.
- Download MariaDB 10.1.24
- Release Notes
- Changelog
- What is MariaDB 10.1?
- MariaDB APT and YUM Repository Configuration Generator
- Download MariaDB Connector/C 2.3.3
- Release Notes
- Changelog
- About MariaDB Connector/C
Thanks, and enjoy […]
The post MariaDB 10.1.24 and Connector/C 2.3.3 now available appeared first on MariaDB.org.
In this blog post, we’ll look at how Percona XtraDB Cluster maintenance mode uses ProxySQL to take cluster nodes offline without impacting workloads.
Percona XtraDB Cluster Maintenance Mode
Since Percona XtraDB Cluster offers a high-availability solution, it must account for the case where a cluster node is taken down for maintenance (through isolation from the cluster or a complete shutdown).
Percona XtraDB Cluster facilitates this with a maintenance mode. Maintenance mode reduces the number of abrupt workload failures when a node is taken down while ProxySQL acts as the load balancer.
The central idea is delaying the core node action …
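The post is truncated here, but the mode itself is driven by the pxc_maint_mode server variable on the node. A minimal mysqlsh (JavaScript) sketch, assuming an X protocol connection to the node being serviced (the URI is an assumption):

// Hypothetical URI -- point this at the cluster node you want to service.
shell.connect("root:root@pxc-node-1:33060")

// Signal that this node is going into maintenance; ProxySQL observes this
// variable and stops routing new requests while existing work drains.
session.sql("SET GLOBAL pxc_maint_mode='MAINTENANCE'").execute()

// ... perform the maintenance work, then bring the node back into rotation:
session.sql("SET GLOBAL pxc_maint_mode='DISABLED'").execute()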
[Read more]

MySQL InnoDB Cluster (or just Group Replication) is becoming more and more popular, and this solution no longer attracts only experts. On social media, in forums, and in other discussions, people ask me what the best way is to migrate a running environment using traditional asynchronous replication [Master -> Slave(s)] to InnoDB Cluster.
The following procedure is what I currently recommend. The objective of these steps is to keep database service downtime to a minimum.
We can divide the procedure into 9 steps:
- the current situation
- preparing the future cluster
- data transfer
- replication from the current system
- creation of the cluster with a single instance
- adding instances to the cluster (see the sketch below)
- configuring the router
- test phase
- pointing the application to the new solution
…
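The details sit behind the link, but steps 5 and 6 map directly onto the MySQL Shell AdminAPI. A minimal sketch in mysqlsh (JavaScript mode); account names and instance addresses are assumptions:

// Connect to the first prepared instance (hypothetical credentials/host).
shell.connect("clusteradmin:secret@mysql1:3306")

// Step 5: create the cluster on that single instance.
var cluster = dba.createCluster("myCluster")

// Step 6: add the remaining instances; they join via Group Replication.
cluster.addInstance("clusteradmin:secret@mysql2:3306")
cluster.addInstance("clusteradmin:secret@mysql3:3306")

// Verify that all members are ONLINE before moving on to the router setup.
cluster.status()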
[Read more]

Hello again everybody.
Well, I promised it a couple of weeks ago, and I’m sorry it has been so long (I’ve been working on other fun stuff in addition to this). But I’m pleased to say that we now have a fully working applier that takes data from an incoming THL stream, whether that is Oracle or MySQL, and converts that into a JSON document and message for distribution over a Kafka topic.
Currently, the configuration is organised with the following parameters:
- The topic name is set according to the incoming schema and table. You can optionally add a prefix. So, for example, if you have a table ‘invoices’ in the schema ‘sales’, your Kafka topic will be sales_invoices, or, if you’ve added a prefix, ‘myprefix_schema_table’ (see the sketch after this list).
- Data is marshalled into a JSON document as part of the message, and the structure is to have a bunch of metadata and then an embedded record. You’ll see an …
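As a small illustration of the topic naming rule described above (the helper function is mine, not part of the applier):

// Topic name = schema + "_" + table, with an optional leading prefix.
function topicName(schema, table, prefix) {
  var base = schema + "_" + table
  return prefix ? prefix + "_" + base : base
}

print(topicName("sales", "invoices") + "\n")             // sales_invoices
print(topicName("sales", "invoices", "myprefix") + "\n") // myprefix_sales_invoices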
If you like to read (a lot) and you're considering migrating your database workloads to AWS, this might be something for you.
Nearly 75 pages of ideas for planning, executing, and
troubleshooting database migrations to Amazon Aurora.
I recently published an Aurora Migration Handbook in the form of
an AWS Whitepaper. The document can be downloaded from
here:
https://d0.awsstatic.com/whitepapers/Migration/amazon-aurora-migration-handbook.pdf
Happy reading!