Showing entries 1 to 10
Displaying posts with tag: log
Where are the logs for a Tungsten Cluster?

The Question

Recently, a customer asked us:

Where are the logs for a Tungsten Cluster and which are the proper log files to monitor if I do a master role switch to another node?

The Answer: Part I

Both basic and advanced logs are available!

The basic logs are symbolic links into the log subdirectory for each layer of the cluster.

For example, this is the listing of the default log directory, /opt/continuent/service_logs:

connector-user.log -> /opt/continuent/tungsten/tungsten-connector/log/connector-user.log
manager-user.log -> /opt/continuent/tungsten/tungsten-manager/log/manager-user.log
mysqldump.log -> /opt/continuent/tungsten/tungsten-replicator/log/mysqldump.log
replicator-user.log -> /opt/continuent/tungsten/tungsten-replicator/log/replicator-user.log
xtrabackup.log -> /opt/continuent/tungsten/tungsten-replicator/log/xtrabackup.log

As you can see, each log file is a …

MySQL Log Rotation

Overview

I find far too often that MySQL error and slow query logs are unaccounted for. Setting up log rotation keeps the logs manageable if they start to fill up, and it makes troubleshooting issues more efficient.

Setup

All steps in the examples below are run as the root user. The first step is to set up a user that will perform the log rotation. It is recommended to grant the MySQL user only enough access for the task it is performing.

Create Log Rotate MySQL User

mysql> CREATE USER 'log_rotate'@'localhost' IDENTIFIED BY '<ENTER PASSWORD HERE>';
mysql> GRANT RELOAD, SUPER ON *.* TO 'log_rotate'@'localhost';
mysql> FLUSH PRIVILEGES;
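
As an aside (my sketch, not part of the original post), the RELOAD privilege granted above is what the rotation step itself needs: after logrotate renames the active log files, the server has to be told to reopen them.

-- Hedged sketch: run after the log files have been renamed.
-- FLUSH LOGS closes and reopens the server's log files, so MySQL
-- starts writing to fresh ones (requires the RELOAD privilege).
FLUSH LOGS;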

 

The next step is to set up the MySQL authentication config as root. Here are two methods to set this up. The first will be the more secure method of …

Track and Optimize Server Connection Methods

The MySQL server supports a variety of client connection methods. To summarize: you have TCP/IP (v4 and v6) on all OSes (with or without TLS/SSL encryption), Unix Domain Sockets on Unix/Linux, and Named Pipes and/or Shared Memory on Windows.

Each of these connection methods has its own set of pros and cons: speed, security, portability, and ease of use. …
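
One way to track which of these methods your clients actually use (my addition; assumes MySQL 5.7+ with the Performance Schema enabled) is to group the current connections by transport type:

-- Count current client connections by transport
-- (TCP/IP, SSL/TLS, Socket, Named Pipe, Shared Memory).
SELECT connection_type, COUNT(*) AS connections
FROM performance_schema.threads
WHERE connection_type IS NOT NULL
GROUP BY connection_type;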

Efficiently writing to a log file from multiple threads

There’s a pattern I keep seeing in threaded programs (or indeed multiple processes) writing to a common log file. This is more of an antipattern than a pattern, and is often found in code that has existed for years.

Basically, it's having a mutex to control concurrent writing to the log file. This is something you simply do not need.

The write system call takes care of it all for you. All you have to do is construct a buffer with your log entry in it (in C, malloc a char[] or keep one per thread; in C++, a std::string may do), open the log file with O_APPEND, and then make a single write() syscall with the log entry.
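
A minimal sketch of that pattern in C (my illustration, assuming POSIX; error handling abbreviated):

/* Open once with O_APPEND, then one write() per log entry --
 * the kernel serializes the appends, so no mutex is needed. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("app.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return 1;

    char entry[256];
    int len = snprintf(entry, sizeof(entry),
                       "pid %ld: something happened\n", (long)getpid());

    /* A single write() call per entry keeps it atomic in practice. */
    if (write(fd, entry, len) != len) {
        /* partial write or error -- see the caveat below */
    }
    close(fd);
    return 0;
}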

This works for just about all situations you care about. If you are doing multi-megabyte writes (a single log entry of multiple megabytes? ouch), you may get into trouble on some systems and get partial writes (IIRC it may have been Mac OS X and 8 MB), and O_APPEND isn't …

Tips for working with append-only databases using sharding and log structured tables

This post is structured as a series of questions and answers in a conversation. I recently had a number of conversations that all went pretty much this same way. If you, like others, have many basic questions about how to proceed when faced with an append-only store for the first time, then hopefully this post will provide some answers for you. The post focuses on column stores, the most common append-only store, but there are others.

Why do I want to use a column store?

Column stores are optimal for OLAP analysis.

Column stores offer substantial performance increases for OLAP compared to row stores. Row stores are optimized for OLTP workloads. While a row store can be used for OLAP, it may not perform well because a row store has to retrieve every column for a row (unless there is a covering index). This is one of the reasons I've said that a covering index allows you …
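
To make the covering-index point concrete (my example, with a hypothetical sales table):

-- The index covers every column the query touches, so a row store
-- can answer it from the index alone instead of fetching whole rows,
-- similar in spirit to a column store reading only the two columns.
CREATE INDEX idx_sales_date_amount ON sales (sale_date, amount);

SELECT sale_date, SUM(amount) AS total
FROM sales
GROUP BY sale_date;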

Dynamic General and Slow Query Log Before MySQL 5.1

This is a hack, but it is a good one. I was looking at some machines for a new client, and they had the general log turned on. I was surprised, because it was a fairly busy server, and they had had many problems with it a few months earlier. I thought perhaps they had turned on the general log to diagnose a problem and forgotten to turn it off, or something similar.

When I looked at the log on disk, I saw it was a symlink to /dev/null. They were running MySQL 5.0, so I immediately realized that it was a hack to have a general log they could “turn on” without having to restart mysqld.

On a different server, I saw the same symlink to /dev/null used for the slow query log.

The drawback to doing it this way is that MySQL still has the overhead of writing to the log file. The I/O overhead is greatly reduced because the writes go to /dev/null, but there is still overhead from other resources such as RAM and CPU.
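
For contrast (my addition, not from the post): from MySQL 5.1 onwards the symlink hack is unnecessary, because both logs became dynamic server variables.

-- MySQL 5.1+: toggle the logs at runtime, no restart needed.
SET GLOBAL general_log = 'ON';
SET GLOBAL slow_query_log = 'ON';
-- ... diagnose ...
SET GLOBAL general_log = 'OFF';
SET GLOBAL slow_query_log = 'OFF';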

New in MySQL 5.1: Sheeri’s Presentation

In a nutshell: What’s New in MySQL 5.1.

Release notes: Changes in release 5.1.x (Production).

And yes, very early on (at about two minutes in), I talk about my take on Monty's controversial post, "Oops, we did it again".

To play the video directly, go to http://technocation.org/node/663/play. To download the 146 MB video to your computer for offline playback, go to http://technocation.org/node/663/download. The slides …

InnoDB logfiles

The unsung heroes of InnoDB are the logfiles. They are what makes InnoDB automatic crash recovery possible.

Database administrators of other DBMSes may be familiar with the concept of a "redo" log. When data is changed, the affected data pages are changed in the innodb_buffer_pool. The change is then written to the redo log, which in MySQL means the InnoDB log files (ib_logfile0 and ib_logfile1). The pages are marked as "dirty", and eventually get flushed and written to disk.

If MySQL crashes, there may be changed data that has not yet been written to disk. Those data pages were marked as "dirty" in the innodb_buffer_pool, but after a MySQL crash the innodb_buffer_pool no longer exists. The changes, however, were written to the redo log. On crash recovery, MySQL can read the redo log (the InnoDB log files) and apply any changes that were not written to disk.
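
To watch this machinery at work (my addition, not from the post), the LOG section of the InnoDB status output shows the current log sequence number, how far the log has been flushed, and where the last checkpoint was:

-- The LOG section reports "Log sequence number", "Log flushed up to",
-- and "Last checkpoint at"; the gap between them is roughly the redo
-- that would have to be applied after a crash.
SHOW ENGINE INNODB STATUS\G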

That is the basic functionality of the InnoDB log files. Given this, …

Writing to the MySQL error log

In almost all application development situations, one needs to log debug information now and then. In almost all production systems, one needs to log serious error events somewhere too.

So, what can you do? Create a log table? Sprinkle your code with SELECT 'Debug: ...' statements?

At the UDF Repository for MySQL, we now have a solution to log messages to the MySQL error log: a user-defined function called log_error().

Currently it is all very crude: the log_error function takes one argument and writes it to the MySQL error log, appending a line …
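
Usage, going by the post's description (the message string is my example; installation of the UDF is covered in the full article):

-- Append one line to the MySQL error log via the UDF.
SELECT log_error('backup job failed: disk full on /var/backup');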

Stopping the slave exactly at a specified binlog position

Catching up on some articles on the Planet MySQL feed, I just read the post by Dathan on how to promote a slave to be master by using MASTER_POS_WAIT(). MASTER_POS_WAIT() is an excellent function that allows you to wait until the slave reaches a point at or after the given binlog position. Observe that after the statement issuing MASTER_POS_WAIT() returns, the slave threads are still running, so even if a STOP SLAVE is issued immediately after the statement with MASTER_POS_WAIT(), the slave is bound to move a little further before actually stopping. For Dathan's situation, exact stopping is not necessary, but wouldn't it be great if you could stop a slave at exactly the position that you want? Well, that …
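
A sketch of the pattern being discussed, with made-up binlog coordinates (the post's own exact-stop solution is truncated above; START SLAVE UNTIL is shown here as the closest built-in equivalent):

-- Block until the slave's SQL thread is at or past the position ...
SELECT MASTER_POS_WAIT('master-bin.000042', 107);
-- ... but by the time this runs, the slave may have moved further.
STOP SLAVE;

-- Built-in way to stop at an exact position: restart the SQL thread
-- with an UNTIL condition.
START SLAVE UNTIL MASTER_LOG_FILE = 'master-bin.000042', MASTER_LOG_POS = 107;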
