Showing entries 41 to 50 of 148
« 10 Newer Entries | 10 Older Entries »
Displaying posts with tag: Technical Blog (reset)
Viewing RMAN jobs status and output

Yesterday I was discussing with a fellow DBA the ways to check the status of current and past RMAN jobs. Good backup scripts usually write their output to some sort of log file, so checking the output is normally a straightforward task. However, backup jobs can be scheduled in many different ways (crontab, Grid Control, Scheduled Tasks, etc.), and finding the log file can be tricky if you don’t know the environment well.
Furthermore, the log files may already have been overwritten by the next backup, or simply deleted. An alternative way of accessing that information can therefore come in handy.

Fortunately, RMAN keeps the backup metadata around for some time, and it can be accessed through the database’s V$ views. Obviously, if you need this information because your database has just crashed and needs to be restored, the method described here is useless.
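As a quick illustration (not taken from the original post), a query along these lines against V$RMAN_BACKUP_JOB_DETAILS, available in 10g and later, shows the status and timing of recent RMAN jobs; V$RMAN_OUTPUT holds the job output itself:

```sql
-- Sketch: status of recent RMAN backup jobs, newest first.
-- Column list trimmed for readability.
SELECT session_key,
       input_type,
       status,
       TO_CHAR(start_time, 'YYYY-MM-DD HH24:MI') AS started,
       TO_CHAR(end_time,   'YYYY-MM-DD HH24:MI') AS ended,
       elapsed_seconds
FROM   v$rman_backup_job_details
ORDER BY start_time DESC;
```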

Backup jobs’ status and metadata

A lot of metadata about …

[Read more]
Oracle Exadata is the “technology that most changed his life,” says Oracle ACE & Pythian DBA Fahd Mirza.

Pythian’s Oracle ACE Fahd Mirza appears in this month’s Community: Peer-to-Peer review “In With the New”, published in the September/October 2011 issue of Oracle Magazine.

Fahd states that the Oracle Exadata Database Machine has most changed his life, changing the game and setting very high standards for performance, support, scalability, reliability, and unification.

Shout out to Fahd from your peers at Pythian!

I guess there might be just a little truth to Pythian’s growing reputation as an “Oracle ACE Factory” ;), as recently mentioned by Justin Kestelyn in the May 11, 2011 OPN PartnerCast:

Please join me in congratulating Fahd by adding a …

[Read more]
Watch for Pythian speakers at upcoming Oracle Technology Days, NoCOUG, OOUG, SQLSaturday & Pythian Australia.

It’s a busy summer at Pythian, with our continuing wave of speaking sessions at upcoming community and regional industry events.

Watch for Pythian presenting hot Oracle and Microsoft SQL Server database topics in a city near you:

IN CANADA:

Oracle Technology Days – Montreal
August 9, 2011 – 8:30am – 1pm, Hilton Montreal Bonaventure

Oracle Technology Days – Toronto
August 25, …

[Read more]
Why you should submit a paper for an Oracle User Group event.

In this post:

  • Introduction
  • Reasons to submit a paper for an Oracle User Group event
  • What should you talk about?

Introduction

Just a few days ago I received a reminder email from Burke Scheld for the “AUSOUG National Conference Series – Perth 2011 – Call for Papers”. I had an event-related conversation with several Oracle guys in my professional networks and the answers I received triggered this blog post. Some of the very good Oracle professionals I personally respect said “…I am not sure what I would get out of it …” or “…I haven’t done anything exciting for the last FEW MONTHS …”.
The answers I received shocked me a bit. Typically I am in the opposite situation: I have so many good things happening that I would love to share with the world, and I have to choose from too many topics when picking which ones to submit. I am sure that I am not very different from other …

[Read more]
How to Run a Streaming Backup with innobackupex

For many of our clients, we need to run XtraBackup as a regular OS user. Aside from running into the issue where tar4ibd was not provided in Percona’s xtrabackup-1.6.2.tar.gz package, our main issues have been with permissions when attempting a streaming backup.

I have found the following:

  1. The user needs permissions for a temp directory to stream to/from. The my.cnf of the target database cannot be used because the user does not have permission to write to /tmp/mysql-stdout, so we set a tmpdir in a separate defaults-file.
  2. A backup target directory must be used that the user has read/write permissions to. It seems to me a target directory should not be needed for a streaming backup, …
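Putting the two points above together, an invocation of the kind described might look like the following; the paths, user name, and defaults-file contents are illustrative, not from the original post:

```shell
# Sketch of a streaming backup as a regular OS user.
# /home/backup/backup-my.cnf points tmpdir at a directory the
# backup user can write to, e.g.:
#   [mysqld]
#   datadir=/var/lib/mysql
#   tmpdir=/home/backup/tmp
innobackupex --defaults-file=/home/backup/backup-my.cnf \
    --user=backupuser --password=... \
    --stream=tar /home/backup/staging | gzip > /home/backup/backup.tar.gz
```

Here /home/backup/staging is the target directory the user has read/write access to, even though the actual backup is streamed to stdout.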
[Read more]
Silent MyISAM Table Definition Changes and mysqldump

The other day, while trying to move a schema from one MySQL server to another, I encountered a very odd issue. The schema to be moved contained both MyISAM and InnoDB tables, so the only option I had was to dump the schema with mysqldump on the source server and import it on the destination server. The dump on the source server completed with absolutely no issues, but the import failed on the destination server with the error message:

Can't create/write to file ‘/disk1/activity.MYI’ (Errcode: 2)


This was an extremely odd message, as the data directory on the destination server was properly set up in terms of ownership and permissions. The source and destination MySQL servers had been running without issues for months. Prior to the error, four tables in the dump file were imported into the destination server without any issues whatsoever. Furthermore, the source and destination servers had the exact same operating system …
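For context: Errcode 2 is the OS-level “No such file or directory” (you can confirm this with the perror utility shipped with MySQL: perror 2). One situation consistent with a .MYI path like /disk1/activity.MYI is a MyISAM table carrying explicit file-location options, which mysqldump preserves in the CREATE TABLE statement. This is a hypothetical illustration, not the table from the original post:

```sql
-- A MyISAM table created with explicit data/index file locations.
-- The dump keeps these options, so the import fails on any server
-- where /disk1 does not exist.
CREATE TABLE activity (
    id   INT NOT NULL PRIMARY KEY,
    note VARCHAR(100)
) ENGINE=MyISAM
  DATA DIRECTORY  = '/disk1'
  INDEX DIRECTORY = '/disk1';
```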

[Read more]
Upgrade to MySQL 5.1.56 on a Bacula server running 5.0.x with MyISAM tables

Hello there, it’s me again, with another blog post about a DBA situation that a typical Linux administrator may find themselves in.

In this post, I’m going to review a recent MySQL upgrade I performed on one of the systems I help administer. This is a real-world example of an upgrade project, and hopefully, when we’re done, there may even be an overall performance boost.

There are several reasons to perform upgrades (of any kind). For me, an important one is keeping current with security and bug fixes, but general performance improvements and new features are always welcome as well.

This system is running Bacula, an open source enterprise backup system. In this particular case Bacula is configured to store data in a MySQL database. The data stored include status reports, backup content lists, lists of all the files on all the systems, schedules and other related information. While everything has been …

[Read more]
Log Buffer #222, A Carnival of the Vanities for DBAs

As the birds start their yearly migration from the warmer areas back to their cooler summer homes, bloggers are also touching base with the technologies they cherish most and coming back with some master strokes. This new cool edition of Log Buffer, the blog carnival covering the hottest topics, encompasses that homecoming. Now chill with Log Buffer #222!

Oracle:

Charles Hooper blogs about an Overly Complicated Use Case Example regarding Row Values to Comma Separated Lists.

[Read more]
Replication Issues: Never purge logs before the slave catches up!

A few days ago, one of our customers contacted us to report a problem with one of their replication servers.

The server was reporting this error:

Last_Error: Could not parse relay log event entry. The possible reasons are: the master’s binary log is corrupted (you can check this by running ‘mysqlbinlog’ on the binary log), the slave’s relay log is corrupted (you can check this by running ‘mysqlbinlog’ on the relay log), a network problem, or a bug in the master’s or slave’s MySQL code. If you want to check the master’s binary log or slave’s relay log, you will be able to know their names by issuing ‘SHOW SLAVE STATUS’ on this slave.

After a brief investigation, we found that the customer had deleted some binary logs from the master and some relay logs from the slave to free up space, since they were running out of disk.

The customer asked us to get the slave working again without affecting the production …
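When only the slave’s relay logs are damaged (and the master still has the corresponding binary logs), the standard recovery is to re-point the slave at the position it had already executed, taken from the Relay_Master_Log_File and Exec_Master_Log_Pos fields of SHOW SLAVE STATUS. This is a generic sketch, not the customer’s actual fix, and the file name and position are hypothetical:

```sql
-- Discard the corrupted relay logs and restart replication from the
-- last executed master position. Only safe if the master still has
-- that binary log; purged events are lost to the slave.
STOP SLAVE;
CHANGE MASTER TO
    MASTER_LOG_FILE = 'mysql-bin.000123',  -- Relay_Master_Log_File
    MASTER_LOG_POS  = 107;                 -- Exec_Master_Log_Pos
START SLAVE;
SHOW SLAVE STATUS\G
```

If the master has also purged the needed binary logs, there is no position to resume from, and the slave must be rebuilt from a fresh backup of the master.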

[Read more]
Handling Human Errors

An interesting question about human mistakes was posted on the DBA Managers Forum discussions today.

As human beings, we sometimes make mistakes. How do you make sure that your employees won’t make mistakes and cause downtime/data loss/etc. on your critical production systems?

I don’t think we can avoid this with technology alone; well-established working procedures are probably the solution.
I’d like to hear your thoughts.

I typed up my thoughts, and as I was finishing, I realized it made sense to post them on the blog too, so here we go…

The keys to preventing mistakes are low stress levels, clear communication, and established processes. This is not a complete list, but I think these are the top things that reduce the number of mistakes we make managing data infrastructure, or, for that matter, working in any critical environment, be it IT administration, …

[Read more]