Showing entries 1 to 10 of 306
Displaying posts with tag: Tools
Generating Identifiers – from AUTO_INCREMENT to Sequence

There are a number of options for generating ID values for your tables. In this post, Alexey Mikotkin of Devart explores your choices for generating identifiers with a look at auto_increment, triggers, UUID and sequences.

AUTO_INCREMENT

We frequently need to fill tables with unique identifiers. Naturally, the first example of such identifiers is PRIMARY KEY data. These are usually integer values hidden from the user since their specific values are unimportant.

When adding a row to a table, you need to take this new key value from somewhere. You can set up your own process of generating a new identifier, but MySQL comes to the aid of the user with the AUTO_INCREMENT column setting. It is set as a column attribute and allows you to generate unique integer identifiers. As an example, consider the …
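
As a quick, hedged illustration of the idea (a minimal sketch with made-up table and column names, not necessarily the example the post goes on to use):

    -- The id column is generated by MySQL itself.
    CREATE TABLE customers (
        id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
        name VARCHAR(100) NOT NULL,
        PRIMARY KEY (id)
    ) ENGINE=InnoDB;

    -- No id is supplied; MySQL assigns 1, 2, 3, ... automatically.
    INSERT INTO customers (name) VALUES ('Alice'), ('Bob');

    -- LAST_INSERT_ID() returns the first id generated by the last INSERT.
    SELECT LAST_INSERT_ID();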

[Read more]
Percona Live Europe Tutorial: Elasticsearch 101

For Percona Live Europe, I’ll be presenting the tutorial Elasticsearch 101 alongside my colleagues and fellow presenters from ObjectRocket, Alex Cercel (DBA) and Mihai Aldoiu (Data Engineer). Here’s a brief overview of our tutorial.

Elasticsearch® is well known as a highly scalable search engine that stores data in a structure optimized for language-based searches, but its capabilities and use cases don’t stop there. In this tutorial, we’ll give you a hands-on introduction to Elasticsearch and a glimpse at some of the fundamental concepts. We’ll cover various administrative topics like …

[Read more]
Easy and Effective Way of Building External Dictionaries for ClickHouse with Pentaho Data Integration Tool

In this post, I provide an illustration of how to use the Pentaho Data Integration (PDI) tool to set up external dictionaries in MySQL to support ClickHouse. Although I use MySQL in this example, you can use any PDI-supported source.

ClickHouse

ClickHouse is an open-source column-oriented DBMS (columnar database management system) for online analytical processing. Source: wiki.
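
Since the post builds dictionaries backed by MySQL tables, here is a rough sketch of what the ClickHouse side can look like. Recent ClickHouse versions accept DDL such as the following, while older ones use an equivalent XML configuration; all names and credentials below are illustrative, not taken from the post:

    -- Hypothetical dictionary reading a MySQL table that PDI keeps populated.
    CREATE DICTIONARY countries_dict
    (
        id   UInt64,
        name String
    )
    PRIMARY KEY id
    SOURCE(MYSQL(host 'mysql-host' port 3306 user 'reader' password 'secret'
                 db 'reference' table 'countries'))
    LAYOUT(HASHED())
    LIFETIME(MIN 300 MAX 600);

    -- Look up an attribute from the dictionary inside a query.
    SELECT dictGet('countries_dict', 'name', toUInt64(42));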

Pentaho Data Integration

Information from the Pentaho wiki: Pentaho Data Integration (PDI, also called Kettle) is the component of Pentaho responsible for the Extract, Transform and Load (ETL) processes. Though ETL tools are most frequently used in data warehouse environments, PDI can also be used for other purposes:

  • Migrating data between …
[Read more]
Scale-with-Maxscale-part5 (Multi-Master)

This is the fifth blog in our Maxscale series. Below is the list of our previous blogs, which provide deep insight into Maxscale and its use cases for different architectures.

[Read more]
Presentation: Handling Schema Changes Via Percona Toolkit

Schema changes in production can cause locking at times and make the slaves lag. It is even more tedious and troublesome with a PXC (Galera) cluster, but it can be made smoother with Percona online schema change.

Image Courtesy: Photo by Andrew Ruiz on Unsplash

ProxySQL Series: Seamless Replication Switchover Using MHA

This is our second blog in the ProxySQL series (Blog I: MySQL Replication Read-Write Split-up). It will cover how to integrate ProxySQL with MHA to handle failover of database servers.

We already have a Master-Slave replication setup behind ProxySQL from the previous blog [ProxySQL On MySQL Replication].
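
For reference, the kind of ProxySQL admin-interface configuration such a setup relies on looks roughly like this; hostgroup IDs, addresses and credentials are illustrative rather than the exact values used in the series:

    -- Run against the ProxySQL admin interface (port 6032 by default).
    -- Hostgroup 10 = writer (current master), hostgroup 20 = readers (slaves).
    INSERT INTO mysql_servers (hostgroup_id, hostname, port)
    VALUES (10, '172.17.0.2', 3306),
           (20, '172.17.0.3', 3306),
           (20, '172.17.0.4', 3306);

    -- Route SELECTs to the reader hostgroup; other statements follow the
    -- user's default hostgroup (the writer).
    INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
    VALUES (1, 1, '^SELECT', 20, 1);

    LOAD MYSQL SERVERS TO RUNTIME;
    SAVE MYSQL SERVERS TO DISK;
    LOAD MYSQL QUERY RULES TO RUNTIME;
    SAVE MYSQL QUERY RULES TO DISK;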

For this setup, we have added one more node for the MHA Manager, which will keep an eye on the Master and Slave status.

  • node5 (172.17.0.5), MHA Manager

ProxySQL can be configured with MHA for …

[Read more]
Presentation: Ansible is our Wishbone

This presentation was made at the LSPE event in Bangalore (India), held at Walmart Labs on 10-03-2018. It focuses on how we have harnessed the power of Ansible at Mydbops.

Online Schema Change for Tables with Triggers.

In this post, we will learn how to handle an online schema change if the table has triggers.

In PXC, an ALTER can be made directly (TOI) on tables smaller than 1 GB (by default), but on a 20 GB or 200 GB table we need some downtime to do it (RSU).
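
For context, RSU is switched per session through the wsrep_OSU_method variable; a rough sketch (the table and column are made up for illustration):

    -- Rolling Schema Upgrade: run on one node at a time; the node desyncs
    -- from the cluster while the ALTER runs, then catches up again.
    SET SESSION wsrep_OSU_method = 'RSU';

    ALTER TABLE orders ADD COLUMN note VARCHAR(255);   -- illustrative DDL

    SET SESSION wsrep_OSU_method = 'TOI';              -- back to the default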

Pt-osc is a good choice for Percona Cluster/Galera. By default, Percona Toolkit’s pt-online-schema-change will create AFTER INSERT / UPDATE / DELETE triggers to maintain sync between the shadow table and the original table.
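
To make that concrete, the triggers pt-osc adds are ordinary MySQL triggers that mirror every write into the shadow (new) table. A simplified, hand-written sketch is shown below; the real tool generates its own trigger names and handles more edge cases:

    -- Simplified illustration only; table and column names are hypothetical.
    CREATE TRIGGER pt_osc_orders_ins AFTER INSERT ON orders
    FOR EACH ROW
      REPLACE INTO _orders_new (id, status) VALUES (NEW.id, NEW.status);

    CREATE TRIGGER pt_osc_orders_upd AFTER UPDATE ON orders
    FOR EACH ROW
      REPLACE INTO _orders_new (id, status) VALUES (NEW.id, NEW.status);

    CREATE TRIGGER pt_osc_orders_del AFTER DELETE ON orders
    FOR EACH ROW
      DELETE IGNORE FROM _orders_new WHERE id = OLD.id;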

pt-online-schema-change process flow:

Check out the complete slides for effective MySQL administration here

If the table already has triggers, then pt-osc won’t work well in …

[Read more]
Scale with Maxscale part-4 (Amazon Aurora)

This is part 4 of the Maxscale blog series.

  1. Maxscale and Galera
  2. Maxscale Basic Administration
  3. Maxscale for Replication

Maxscale started supporting Amazon Aurora recently, from its version 2.1, which comes with a BSL license; we are good as long as we use only 3 nodes, …

[Read more]
Maxscale Data Archiving with filters Mq & Tee (Mirror)

Introduction –

Maxscale is an excellent proxy from MariaDB Corporation which provides high availability, real-time read/write split with replication, Galera cluster, Amazon RDS, Amazon Aurora, binlog streaming and many more advanced features. In this blog we will discuss one such feature.

In this blog post, I am going to share my recent activity with Maxscale. We had to help one of our clients archive only the DML (CREATE & INSERT) data from a specific table into an archive server.

Problem Statement –

Our client has only one standalone (Master) setup and an archive server, and they need to archive one …

[Read more]