How to copy a MySQL user to OCI MDS?

When you migrate to MySQL Database Service on Oracle Cloud Infrastructure (MDS on OCI), the easiest, fastest, and recommended way is to use the MySQL Dump & Load Utility.
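As a minimal, hedged sketch (the output path, options, and compatibility flags below are illustrative assumptions, not the article's exact commands), dumping an instance together with its user accounts from MySQL Shell could look like this:

    JS> util.dumpInstance("/backups/prod", {ocimds: true, users: true,
            compatibility: ["strip_restricted_grants", "strip_definers"]})

On the MDS side, util.loadDump() with the loadUsers: true option would then recreate those accounts during the load.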

For more information, check the following links:

[Read more]
MySQL NDB Cluster Backup/Restore Challenge

Hey, dolphins! Ready to test your NDB backup and restore skills?

Q1: You have a large database which takes 3 hours to back up. Insert/update/delete traffic will run during the backup. How do you run a backup so that none of the inserts/updates/deletes which are executed after the start of the backup are reflected in the backup files?…
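(A hint rather than an official answer: the NDB management client's START BACKUP command takes a snapshot option, and SNAPSHOTSTART makes the backup consistent with the state of the database at the start of the backup instead of at its end, which is the SNAPSHOTEND default.)

    ndb_mgm> START BACKUP SNAPSHOTSTART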

Table Partitioning In MySQL NDB Cluster And What’s New (Part II)

What's new in NDB Cluster version 7.5

In this version, users have more flexible ways of partitioning tables than only the default partitioning by LDM. A user can now partition a table either by node or by LDM. Four different table partitioning options are supported:

  • FOR_RP_BY_NODE
  • FOR_RA_BY_NODE
  • FOR_RP_BY_LDM (Default)
  • FOR_RA_BY_LDM
    • FOR_RA_BY_LDM_X_2
    • FOR_RA_BY_LDM_X_3
    • FOR_RA_BY_LDM_X_4

In the names above, RA means read from any replica (either the primary replica or a backup replica), while RP means read from the primary replica only. A user can specify these options in the COMMENT section of either a CREATE TABLE or an ALTER TABLE SQL statement, like below.

mysql> create table t1(col1 int unsigned not null primary key …
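As a hedged illustration of the full syntax (the second column and exact layout are assumptions, since the original statement is truncated), the partition balance is passed through the NDB_TABLE comment:

    mysql> CREATE TABLE t1 (
        ->   col1 INT UNSIGNED NOT NULL PRIMARY KEY,
        ->   col2 VARCHAR(50)
        -> ) ENGINE=NDB
        -> COMMENT="NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_NODE";

    mysql> ALTER TABLE t1
        -> COMMENT="NDB_TABLE=PARTITION_BALANCE=FOR_RA_BY_LDM_X_2";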

[Read more]
Analyst Report: Oracle Cranks up the Heat in the MySQL Cloud Market

Bringing High Performance Analytics to MySQL
Author: Tony Baer, Principal, dblnsight
Main takeaway: “The MySQL landscape needed a shakeup. Until now, it was considered the default go-to open source database for online transaction processing (OLTP) for the scenarios not requiring the more sophisticat...

RonDB, automatic thread configuration


This blog introduces how RonDB handles automatic thread configuration. It is more technical and dives deeper under the surface of how RonDB operates. RonDB provides a configuration option, ThreadConfig, whereby the user can take full control over the assignment of threads to CPUs, how CPU locking is performed, and how each thread should be scheduled.
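As a hedged illustration only (the thread counts and CPU IDs below are assumptions for a small 8-vCPU VM, not RonDB's generated defaults), a manual setting in the cluster configuration file might look like:

    [ndbd default]
    ThreadConfig=main={cpubind=0},ldm={count=4,cpubind=1,2,3,4},recv={count=1,cpubind=5},send={count=1,cpubind=6},rep={cpubind=7}

The managed version of RonDB, discussed next, derives equivalent settings automatically from the VM size.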

However, for the absolute majority of users this is too advanced, so the managed version of RonDB ensures that the thread configuration is based on best practices found over decades of testing. This means that every user of the managed version of RonDB gets access to a thread configuration that is optimised for their particular VM size.


In addition, RonDB makes use of adaptive CPU spinning in a way that limits power usage but still provides very low latency in all database operations. …

[Read more]
How to Build Percona Server for MySQL From Sources

Lately, the number of questions about how to build Percona software has increased. More and more people try to add their own patches, make modifications, and build the software themselves. But this raises the question of how to do it the same way Percona does, as a compiler flag can sometimes have a drastic impact on the final binary.

First of all, let’s talk about the stages of compiling software.

I would say that at the beginning you need to prepare the build environment, install all the needed dependencies, and so on. For each version, the dependency list will be different. How do you get the correct dependency list? You can get all build requirements from the spec file (on RPM-based systems) or from the control file (on DEB-based systems).
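As a hedged example (the package and file names below are assumptions and vary by Percona Server version), the dependencies can be pulled in directly from those files:

    # RPM-based systems: yum-builddep ships with yum-utils
    yum-builddep percona-server.spec

    # DEB-based systems: mk-build-deps ships with devscripts; run it from
    # the unpacked source tree that contains debian/control
    mk-build-deps --install --remove debian/control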

The next stage is to get the source code of Percona …

[Read more]
Oracle’s InnoDB Cluster - High Noon with Tungsten Clustering for MySQL High Availability (HA), Disaster Recovery (DR) and Geographic Distribution

This is the fifth blog in our ‘High Noon’ comparison series in which we look at the main solutions for MySQL high availability, disaster recovery and geographic distribution. Here we focus on highly available, geo-scale, multi-region MySQL for mission-critical sites and apps with Oracle’s InnoDB Cluster as compared to MySQL clusters with Continuent Tungsten, the only complete, fully-integrated clustering solution for MySQL - on-premises, in the cloud, hybrid-cloud or multi-cloud.

Tags: innodb cluster, high availability (HA), MySQL, disaster recovery (DR)

[Read more]
New MySQL Entity Framework Core packages for the Connector/NET Provider at NuGet

Hello MySQL Connector/NET community,

Starting with the 8.0.23 release, our provider for Entity Framework Core has a new name. The main goal is to keep support for the different versions of Microsoft Entity Framework Core and to ensure those versions remain tightly coupled with our releases. Also, this new naming is more specific regarding the purpose of the package. Hence, the Data part of the name was removed.

Before:

    MySql.Data.EntityFrameworkCore v8.0.x

Now:

    MySql.EntityFrameworkCore v8.0.x
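As a hedged illustration (the commands below are generic dotnet CLI usage, not an excerpt from the release notes), switching an existing project over to the renamed package could look like:

    dotnet remove package MySql.Data.EntityFrameworkCore
    dotnet add package MySql.EntityFrameworkCore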

Now that Microsoft maintains more than a single version of Entity Framework Core, we needed to find a way to name our packages and maintain the correlation between the versions of Entity Framework Core and MySQL. So that’s when we came up with using the metadata of the packages. The package version now consists of two parts, the …

[Read more]
#WDILTW – Creating examples can be hard

This week I was evaluating AWS QLDB, specifically its verifiable history of changes, to determine how to simplify present processes that perform auditing via CDC. This is not the first time I have looked at QLDB, so there was nothing that new to learn.

What I found was that creating a workable solution with an existing application is hard. Even harder is creating an example to publish in this blog (which is the purpose of this post).

First some background.

Using MySQL as the source of information, how can you leverage QLDB? It's easy to stream data from MySQL Aurora, and it's easy to stream data from QLDB, but it is not that easy to place real-time data into QLDB. AWS DMS is a good way to move data from a source to a target; previously my work has included MySQL to MySQL, MySQL to Redshift, and MySQL to Kinesis, …

[Read more]