Error 'Table 'EMPLOYEES.POSITION' doesn't exist' on query. Default database: 'employees'. Query: 'ALTER TABLE EMPLOYEES.POSITION ADD COLUMN phone VARCHAR(15)'
Interestingly, the table does exist on the slave server. But we were getting the above error frequently, and we could not easily rebuild the broken replication because the database size is too big.
Our environment is a GTID replication setup from a Windows server (master) to an Ubuntu Linux machine (slave). When we dug into all the findings, we concluded it may be a case-sensitivity issue. The lower_case_table_names variable has the same value on both servers. But as per the MySQL documentation, database and table names are not case-sensitive on Windows, but are case-sensitive on most varieties of Unix. Column, index, stored
routine, and event names are …
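A quick way to confirm whether this is the problem (a minimal sketch; the schema and table names are taken from the error above) is to compare the setting and the object names as they are actually stored on each server:

-- run on both the Windows master and the Linux slave
SHOW VARIABLES LIKE 'lower_case_table_names';
-- see how the table name is stored on the slave; if it comes back as
-- 'position' in lowercase, a statement referencing EMPLOYEES.POSITION
-- will not resolve on the case-sensitive Linux side
SELECT table_schema, table_name
FROM information_schema.tables
WHERE LOWER(table_schema) = 'employees'
  AND LOWER(table_name)   = 'position';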
Wow!!! It's now easy to restore sensitive data without any fear!!!
It's really tough sometimes: we once restored sensitive data without realizing it, and test mails were triggered to customers saying $100 had been deducted from their account for a purchase. It's a strange scenario when we miss cleansing the customer data in the DEV sandbox!!!
Yes, masking sensitive / credential data is easy now!!! Reference: https://github.com/kibitan/masking
I just tried a simple exercise, masking the paymentdb table data with masking, and it's working as expected.
Step 1: Install the latest Ruby version and masking using the commands below. Before installing, update the server with the latest packages.
rvm install ruby-2.6.3
gem install masking
…
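The remaining steps are cut off in this excerpt, but as a rough sketch of where this typically goes (the masking.yml layout and the --complete-insert detail shown here are assumptions, so double-check them against the project README), masking is driven by a masking.yml that maps table columns to replacement values and filters a dump on its way to the sandbox:

# masking.yml (assumed layout: table -> column -> masked value)
payments:
  card_number: "4242-XXXX-XXXX-XXXX"
  customer_email: "masked@example.com"

# column names need to appear in the INSERT statements for masking to match them
mysqldump --complete-insert paymentdb | masking > paymentdb_masked.sql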
Problem:
We are running a 5-node Percona cluster on Ubuntu 16.04, configured with master-slave replication. Suddenly we got an alert that replication was broken on the slave server, which had earlier been configured with normal asynchronous replication.
We tried to sync the data and reconfigure the replication, but were unable to fix it immediately due to the huge transaction volume and the GTID-enabled servers. So we decided to go with the innobackupex tool, and the problem was fixed in 2 hours.
I followed all the steps from the Percona documentation and am sharing the experience from my environment.
Steps involved in repairing the broken replication (a command sketch follows below):
1. Backup the master server
2. Prepare the backup
3. Restore and configure the replication
4. Check the replication status
1. Backup …
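Since the detailed commands are truncated in this excerpt, here is a minimal sketch of what the four steps typically look like with innobackupex (hosts, paths and credentials are placeholders; on GTID setups the starting position comes from the xtrabackup_binlog_info file written into the backup directory):

# 1. Backup the master
innobackupex --user=backup --password=secret /data/backups/

# 2. Prepare the backup
innobackupex --apply-log /data/backups/<timestamp_dir>/

# 3. Restore on the slave (datadir must be empty) and point replication at the master
innobackupex --copy-back /data/backups/<timestamp_dir>/
chown -R mysql:mysql /var/lib/mysql
mysql -e "RESET MASTER;
          SET GLOBAL gtid_purged='<GTID set from xtrabackup_binlog_info>';
          CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='repl',
            MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1;
          START SLAVE;"

# 4. Check the replication status
mysql -e "SHOW SLAVE STATUS\G"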
Good news! Once again, the MySQL, MariaDB & Friends Devroom has been accepted for the FOSDEM'20 edition!!
This event is a real success story for the MySQL ecosystem; the content, the speakers and the attendees are growing every year.
The FOSDEM 2020 edition will take place on 1st & 2nd February in Brussels, and our MySQL, MariaDB & Friends devroom will run on Saturday 1st (this may change). FOSDEM & MySQL/MariaDB is a love story that started 20 years ago!
The committee that selects the content for our devroom has not yet been formed, and if you want to be part of this experience, just send me an email (candidate at mysqlmariadbandfriends dot eu) before Oct 26th.
If you want to join the committee, you have to meet the following conditions:
- planning to be present at FOSDEM
- having a link …
We were pleased to sponsor the Percona Live Conference in Austin this year: many thanks to the Percona Team for organising a smooth conference yet again!
This is the recap of our week in Texas!
At The Conference
This year’s conference was the first one not taking place in Santa Clara, CA, but rather in Austin, TX. This turned out to be a nice choice by Percona, as it meant that open source database users who may not have travelled to California in the past, were attracted to the new location; and Austin being the new hot spot for (tech) companies at the moment, a lot of “locals” seemed to have made the choice to attend the conference. It was great to meet many new faces as a result.
As Diamond Sponsors of the conference we were of course present
with a booth in the exhibition hall, as well as with three
talks.
And while the hotel looked slightly dystopian at night, it was in fact a nice and …
[Read more]
(In the previous post, Part 6, we covered Replication.)
In this final blog post, we conclude our series of exploring MyRocks by taking a look at use case considerations. After all, having knowledge of how an engine works is really only applicable if you feel like you’re in a good position to use it.
Advantages of MyRocks
Let’s start by talking about some of the advantages of MyRocks.
Compression
MyRocks will typically do a good job of reducing the physical footprint of your data. As I mentioned in my previous post in this series about compression, you have the ability to configure compression down to the individual compaction layers for each column family. You also get the advantage of the fact that data isn’t updated once it’s written to disk. Compaction, which was …
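As a rough illustration of that kind of tuning (the option string follows RocksDB's column family options syntax; the specific codecs and level layout below are example assumptions, not recommendations), per-level compression can be set through the default column family options in my.cnf:

# my.cnf sketch: no compression on the hot upper levels, LZ4 in the middle, ZSTD at the bottom
rocksdb_default_cf_options=compression_per_level=kNoCompression:kNoCompression:kLZ4Compression:kLZ4Compression:kLZ4Compression:kLZ4Compression:kZSTD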
[Read more]
Today we’re happy to announce that we’ll be partnering with Datavail to provide solutions for continuous & highly available MySQL, Percona Server & MariaDB database operations based on Tungsten Clustering & Datavail Database Services.
Datavail is a renowned, tech-enabled data management, applications, business intelligence, and software solutions provider with a team of 700+ DBAs that look after customers’ database environments.
What are we aiming for?
Together we’re looking to continue to drive momentum in supporting rapid MySQL & MariaDB based application deployments as well as highly available and scalable database implementations for existing and future customers.
This new …
[Read more]
(In the previous post, Part 5, we covered Data Reads.)
In this blog post, we continue our series exploring MyRocks mechanics by looking at the configurable server variables and column family options. In our last post, I explained at a high level how reads occur in MyRocks, concluding the arc of covering how data moves into and out of MyRocks. In this post, we’re going to explore replication with MyRocks, more specifically read-free replication.
Some of you may already be familiar with the concepts of read-free replication as it was a key feature of the TokuDB engine, which leveraged fractal tree indexing. TokuDB was similar to MyRocks in the sense that it had a pseudo log-based storage …
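As a pointer for readers who want to try it (the exact variable name and allowed values differ between MyRocks builds, so treat this as an assumption to verify against your version's documentation), read-free replication is switched on through a replica-side server variable:

# my.cnf on the replica (Percona Server style; older Facebook builds used
# rocksdb_read_free_rpl_tables with a table-name pattern instead)
rocksdb_read_free_rpl = PK_SK   # skip the read-before-write for PK and secondary-key changes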
[Read more]
(In the previous post, Part 4, we covered Compression and Bloom Filters.)
In this blog post, we continue our series exploring MyRocks mechanics by looking at the configurable server variables and column family options. In our last post, I explained at a high level how compression and bloom filtering are applied to data files as they are initially flushed from immutable memtables and are subsequently passed through the compaction process. With that covered, we should now have a clear understanding of how data writing works in MyRocks and can start reviewing how data read requests are handled.
The Read Process
Let’s start off by talking about how read processes are handled at the file level. When a read request comes in, the first thing it needs to do is pull the …
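A simple way to watch this behaviour on a running server (a sketch; these counters are part of MyRocks' status variables, but check which ones your build exposes) is to compare block cache hits against misses:

-- reads served from the block cache vs. reads that had to go to the data files
SHOW GLOBAL STATUS LIKE 'rocksdb_block_cache%';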
[Read more]
(In the previous post, Part 3, we covered Compaction.)
In this blog post, we continue our series exploring MyRocks mechanics by looking at the configurable server variables and column family options. In our last post, I explained at a high level how data moves from its initial disk-written files into the full data set structure of MyRocks using a process called compaction. In this post, we’re going to look a little closer at two important features that are leveraged as data cascades down through this compaction process: bloom filters and compression.
Bloom filters
Before we approach how bloom filters are used in MyRocks, we need to know what a bloom filter is. The short definition is that a bloom filter is a space-efficient data structure used to tell you if an …
[Read more]
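Tying the bloom filter concept back to MyRocks configuration (a sketch; the 10-bits-per-key figure is just a commonly used example value, not a recommendation), bloom filters are enabled per column family through the block-based table options:

# my.cnf sketch: 10 bits per key, full (non-block-based) bloom filter
rocksdb_default_cf_options=block_based_table_factory={filter_policy=bloomfilter:10:false}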