There was an idea. An idea to make Vitess self-reliant. An idea to get rid of the friction between Vitess and external fault-detection-and-repair tools. An idea that gave birth to VTOrc… Both VTOrc and Orchestrator are tools for managing MySQL instances. If I were to describe these tools using a metaphor, I would say that they are kinda like the monitor of a class of students. They are responsible for keeping the MySQL instances in check and fixing them up in case they misbehave, just like how a monitor ensures that no mischief happens in the classroom.
Here at Pythian we get a lot of exposure to new technologies and implementation strategies via the work we do internally and for our clients. The most noteworthy technology stack that I’ve seen get a lot of traction in the MySQL community recently is the high availability stack including Orchestrator, Consul and ProxySQL.
I won’t dive too deeply into the details of this implementation as there are several blog posts our team has published on the topic, but the key thing I want you to keep in mind here is the usage of Consul as a “source of truth” for the state of your MySQL replication clusters. If Orchestrator or its adjacent scripts are running as expected, Consul should always have the latest information pertaining to the state of your cluster. This is incredibly valuable. In fact, …[Read more]
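To make the “source of truth” idea concrete, here is a small illustrative Python sketch (not from the post) of how a client might resolve the current writer from a Consul-style key/value store. The key layout mirrors Orchestrator’s default mysql/master/&lt;cluster&gt; KV prefix; the dict stands in for a real Consul agent, so this is a sketch of the lookup logic, not a Consul client.

```python
# Illustrative sketch: a plain dict stands in for Consul's KV store.
# Orchestrator (by default) publishes the current master under keys
# shaped like mysql/master/<cluster-alias>/{hostname,port} -- an
# assumption here, check your KVClusterMasterPrefix setting.
def resolve_writer(kv: dict, cluster: str) -> tuple:
    """Return (hostname, port) of the cluster's current master."""
    prefix = "mysql/master/%s" % cluster
    return kv[prefix + "/hostname"], int(kv[prefix + "/port"])

kv = {
    "mysql/master/shop/hostname": "db1.example.com",
    "mysql/master/shop/port": "3306",
}
print(resolve_writer(kv, "shop"))  # ('db1.example.com', 3306)
```

In a real deployment the dict lookup would be a `consul kv get` or an HTTP call to the local Consul agent, but the routing decision is exactly this simple once the KV data is trusted.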
While testing in an Orchestrator lab, I saw that none of my Orchestrator raft nodes were coming back online after a reboot.
This is the status report from systemd.
$ sudo systemctl status orchestrator
* orchestrator.service - orchestrator: MySQL replication management and visualization
   Loaded: loaded (/etc/systemd/system/orchestrator.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2020-04-03 09:30:05 UTC; 30s ago
     Docs: https://github.com/github/orchestrator
 Main PID: 957 (code=exited, status=1/FAILURE)

Apr 03 09:30:05 orchestrator-1 systemd: Started orchestrator: MySQL replication management and visualization.
Apr 03 09:30:05 orchestrator-1 orchestrator: 2020-04-03 09:30:05 ERROR dial tcp 127.0.0.1:3306: connect: connection refused
Apr 03 09:30:05 orchestrator-1 orchestrator: 2020-04-03 09:30:05 FATAL dial tcp 127.0.0.1:3306: connect: connection refused
Apr 03 09:30:05 orchestrator-1 systemd: …[Read more]
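The `connection refused` on 127.0.0.1:3306 suggests Orchestrator came up before its local MySQL backend was accepting connections, and then exited for good. One hedged way to make the unit survive that race is a systemd drop-in that retries on failure and orders the service after the database (the backend unit name `mysqld.service` is an assumption; adjust to your distribution):

```ini
# /etc/systemd/system/orchestrator.service.d/override.conf
# Hypothetical drop-in, not from the post: keep retrying instead of
# failing once at boot, and start after the (assumed) local backend.
[Unit]
After=network-online.target mysqld.service
Wants=network-online.target

[Service]
Restart=on-failure
RestartSec=10
```

Run `sudo systemctl daemon-reload` after adding the drop-in for it to take effect.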
In the MySQL ecosystem, orchestrator is the most popular and well-respected high availability and topology management tool, integrating well with other solutions such as ProxySQL. It facilitates automatic (or manual) discovery, refactoring and recovery of a replicated MySQL environment, and comes complete with both command-line (CLI) and web interfaces for both humans and machines to interact with.

As we all know, humans are prone to errors, and as such accidents can happen, particularly when humans and computers interact with each other! Recently, one of these situations involved the orchestrator web interface during topology refactoring with its drag-and-drop capabilities, where a drop occurred unintentionally and thus had an impact on replication. When …[Read more]
In this post, I am going to show you how to run Orchestrator on FreeBSD. The instructions have been tested on FreeBSD 11.3, but the general steps should apply to other versions as well.
At the time of this writing, Orchestrator doesn’t provide FreeBSD binaries, so we will need to compile it.
Preparing the Environment
The first step is to install the prerequisites. Let’s start by installing git:
[vagrant@freebsd ~]$ sudo pkg update
Updating FreeBSD repository catalogue...
Fetching meta.txz: 100%    944 B   0.9kB/s    00:01
Fetching packagesite.txz: 100%    6 MiB 492.3kB/s    00:13
Processing entries: 100%
FreeBSD repository update completed. 31526 packages processed.
All repositories are up to date.
[vagrant@freebsd ~]$ sudo pkg install git
Updating FreeBSD repository catalogue...
FreeBSD repository is up to date.
All repositories are up to date.
New …[Read more]
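The excerpt is truncated, but since Orchestrator ships no FreeBSD binaries, the remaining steps are broadly: install the Go toolchain, fetch the source, and build. A hedged sketch of those steps (package name and build script assumed from the upstream repository layout, not taken from the post):

```shell
# Assumed follow-on steps, not from the truncated post.
# Install the Go toolchain from FreeBSD packages:
sudo pkg install go

# Fetch the Orchestrator source:
git clone https://github.com/github/orchestrator.git
cd orchestrator

# Build a standalone binary (build entry point assumed from the
# repository layout; check the project README for your version):
./script/build
```

Consult the project's own build documentation for the authoritative procedure; flags and helper scripts vary between releases.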
Recently, I was puzzled by MySQL replication! Some weird, but completely documented, behavior of replication had me scratching my head for hours. I am sharing this war story so you can avoid losing time like me (and also maybe avoid corrupting your data when restoring a backup). The exact justification will come in a follow-up post, so you can also scratch your head trying …
In this post, we will explore one approach to MySQL high availability with ProxySQL, Consul and Orchestrator.
This is a follow up to my previous post about a similar architecture but using HAProxy instead. I’ve re-used some of the content from that post so that you don’t have to go read through that one, and have everything you need in here.
Let’s briefly go over each piece of the puzzle:
– ProxySQL is in charge of connecting the application to the appropriate backend (reader or writer).
It can be installed on each application server directly or we can have an intermediate connection layer with one or more ProxySQL servers. The former probably makes sense if you have a small number of application servers; as the number grows, the latter option becomes more attractive. Another scenario for the …[Read more]
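To give a flavor of the reader/writer routing described above, a minimal read/write split can be configured through ProxySQL's admin interface roughly like this (the hostgroup numbers 10 and 20 are assumptions for this sketch, not values from the post):

```sql
-- Assumed hostgroups: 10 = writer, 20 = readers.
-- Default routing sends traffic to the writer; plain SELECTs go to
-- the readers, while SELECT ... FOR UPDATE stays on the writer.
INSERT INTO mysql_query_rules
  (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES
  (1, 1, '^SELECT .* FOR UPDATE', 10, 1),
  (2, 1, '^SELECT ', 20, 1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```

These statements are issued against the ProxySQL admin port (6032 by default), not against MySQL itself.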
PORP Lab: ProxySQL/Orchestrator/Replication/PMM

Summary

The PORP lab creates 4 different nodes. Each node has the packages/applications/databases below installed.

app
-- Percona Server 5.7
-- Percona Toolkit
-- Percona XtraBackup
-- Sysbench
-- ProxySQL
-- Orchestrator
-- PMM

mysql1 / mysql2 / mysql3
-- Percona Server 5.7
-- Percona Toolkit
-- pmm-client
-- Replication

The PORP lab comes with ProxySQL, Orchestrator and PMM properly configured, so we can just create the lab and use it.
Install Vagrant plugin hostmanager
vagrant plugin install vagrant-hostmanager
Update Vagrant Plugin
vagrant plugin update
Clone the repo
git clone …[Read more]
In the context of providing managed WordPress hosting services, at Presslabs we operate with lots of small to medium-sized databases, in a DB-per-service model, as we call it. The workloads are mostly reads, so we need to efficiently scale that. The MySQL® asynchronous replication model fits the bill very well, allowing us to scale horizontally from one server—with the obvious availability pitfalls—to tens of nodes. The next release of the stack is going to be open-sourced.
As we were already using Kubernetes, we were looking for an operator that could automate our DB deployments and auto-scaling. Those available were doing synchronous replication using MySQL group replication or Galera-based replication. Therefore, we decided to write our own operator.
The …[Read more]
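To give a flavor of what such an operator's API looks like, a cluster declaration for the Presslabs mysql-operator is roughly shaped like the following (field names are assumptions based on the project's published examples, not taken from this post):

```yaml
# Hypothetical MysqlCluster resource for the Presslabs operator.
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2                   # one master plus one read replica
  secretName: my-cluster-secret # Secret holding the root credentials
```

The operator watches resources like this and reconciles the actual StatefulSets, replication wiring and scaling to match the declared spec.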
Orchestrator is a MySQL high availability and replication management tool. In this blog post, we will cover the first steps for getting started with it on an existing topology.
The code examples assume you are running CentOS 7, but the general steps should be similar if you are running other operating system versions/flavors.
1. Create a MySQL user on each of your database servers.
Orchestrator will connect with this user to discover the topology and to perform any changes you tell it to make.
CREATE USER 'orchestrator'@'%' IDENTIFIED BY '****';
GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD ON *.* TO 'orchestrator'@'%';
GRANT SELECT ON mysql.slave_master_info TO 'orchestrator'@'%';
GRANT SELECT ON meta.* TO 'orchestrator'@'%';
Note: Orchestrator reads replication credentials stored in mysql.slave_master_info table, which implies you need to set up your servers with master_info_repository = …[Read more]
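For reference, that setting lives in the MySQL server configuration; a minimal my.cnf fragment would look like this (TABLE is the value required for Orchestrator to read the credentials from mysql.slave_master_info):

```ini
[mysqld]
# Store replication metadata in the mysql.slave_master_info table
# (instead of a file) so Orchestrator can read the credentials.
master_info_repository = TABLE
```

A restart, or setting the variable dynamically where your MySQL version allows it, is needed for the change to take effect.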