Present a Talk on MySQL at a Conference such as FOSDEM, SCaLE or SunshinePHP

Keith Larson and I (and now Lenka Kasparova) have been the MySQL Community Team for the past few years, and we have traveled to a great many conferences and spoken for many hours on MySQL. But we need your help. There are conferences all over that we cannot attend, and in many cases we are speaking but would like others to speak too! Many conferences are in great need of speakers such as yourself.

Three upcoming shows are excellent opportunities and are actively seeking you as a presenter.

  • FOSDEM is February 2nd and 3rd in Brussels, Belgium. There will be a special MySQL dev room and the call for papers ends on December 21st.
  • There is a new PHP conference in Florida! …
[Read more]
Shinguz: MySQL tmpdir on RAM-disk

MySQL temporary tables are created either in memory (as MEMORY tables) or on disk (as MyISAM tables). You can find out how many tables went to disk and how many went to memory with:

mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
+-------------------------+----------+
| Variable_name           | Value    |
+-------------------------+----------+
| Created_tmp_disk_tables | 49094    |
| Created_tmp_tables      | 37842181 |
+-------------------------+----------+
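To put those counters in proportion, the disk spill ratio can be computed directly; on 5.5/5.6-era servers the same counters are also exposed through information_schema.GLOBAL_STATUS (a small sketch, not from the original post):

mysql> SELECT d.VARIABLE_VALUE / t.VARIABLE_VALUE AS disk_tmp_table_ratio
         FROM information_schema.GLOBAL_STATUS d
         JOIN information_schema.GLOBAL_STATUS t
           ON d.VARIABLE_NAME = 'CREATED_TMP_DISK_TABLES'
          AND t.VARIABLE_NAME = 'CREATED_TMP_TABLES';

With the values above this gives 49094 / 37842181 ≈ 0.0013, i.e. only about 0.13% of temporary tables spilled to disk.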


Tables created in memory are typically faster than tables created on disk. Thus we want as many tables as possible to be created in memory.
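The rest of the post sits behind the link, but the general idea the title points at can be sketched as follows (mount point and size are assumptions, not values from the post): mount a tmpfs RAM disk and point tmpdir at it, so that even the "disk" temporary tables land in RAM.

# /etc/fstab: a tmpfs RAM disk (size and mount point are illustrative)
tmpfs  /mnt/mysqltmp  tmpfs  rw,size=1G  0  0

# my.cnf: direct MySQL's temporary files at the RAM disk
[mysqld]
tmpdir = /mnt/mysqltmp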

[Read more]
Can we afford big data, or do we need smart data?

With the Big Data craze that’s sweeping the world of technology right now, I often ask myself whether we’re deficit-spending, so to speak, with our data consumption habits. I’ve seen repeated examples of being unwilling to get rid of data, even though it’s unused and nobody can think of a future use for it. At the same time, much Big Data processing I’ve seen is brute-force and costly: hitting a not-very-valuable nut with a giant, expensive sledgehammer. I think the combination of these two problems represents a giant opportunity, and I’m going to call the solution Smart Data for lack of a better word.

What’s the problem, in 25 words or less? I think it’s that we’re collecting a lot of data simply because we can. Not because we know of any good use for it, but just because it’s there.

What is the real cost of all of this data? I think we all know we’re well behind the curve in making use of it. A huge …

[Read more]
Changes in Twitter MySQL 5.5.28.t8

Earlier this week we pushed the eighth iteration of Twitter MySQL to GitHub. Here are some of the highlights. But before that, a quick plug: if you are looking for a new opportunity and enjoy working on database internals, we should talk!

Bugs Fixed

  • Bug#67433: Using SET GLOBAL SQL_LOG_BIN should not be allowed. Earlier in the MySQL 5.5 development cycle, the SQL_LOG_BIN variable was made both global and session-scoped, instead of session-only as it was in previous releases. We believe that use of SQL_LOG_BIN at the global scope is quite dangerous and offers little to no benefit, so we made SQL_LOG_BIN a session-only variable once again, generating an error if it is used with SET GLOBAL (see the sketch after this list).
  • Bug#67476: …
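A minimal illustration of the Bug#67433 behavior described above (the statements are an assumption based on the bug description, not taken from the patch itself):

-- Session scope still works: skip binary logging for this connection only
SET SESSION sql_log_bin = 0;
-- ... statements that should not be written to the binary log ...
SET SESSION sql_log_bin = 1;

-- The global form is rejected again, as in pre-5.5 releases
SET GLOBAL sql_log_bin = 0;   -- now generates an error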
[Read more]
3 Methods to Extract a Subset of Your Data Using mysqldump

A few years ago I wrote a tool to extract a subset of data from a production database for use in QA and development environments. My goal was to have some real data for functional testing, but to make the data set small enough that a developer could easily download it and install it on their laptop in a few minutes.

I implemented the tool using mysqldump. As I have maintained the tool over the years I've employed a couple of different approaches, each of which I will describe in more detail below.

The first step was to identify the records I wanted to include in my subset. I created a couple of tables to store the ids of the records I wanted, and then some queries to populate those tables based on various criteria.

For example, say I want to dump the data for the 10 shortest PG rated movies in the sakila database. Here's an example of …
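The example query itself is cut off above; a rough sketch of the id-table approach it describes (the helper table name is hypothetical, the columns follow the standard sakila schema) might look like this:

CREATE TABLE ids_film (film_id SMALLINT UNSIGNED PRIMARY KEY);

INSERT INTO ids_film (film_id)
SELECT film_id
FROM sakila.film
WHERE rating = 'PG'
ORDER BY length ASC
LIMIT 10;

The id table can then drive the dump, for instance via mysqldump's --where option or by joining against it in subsequent extraction queries.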

[Read more]
Announcing Percona XtraDB Cluster 5.5.28-23.7

Percona is glad to announce the release of Percona XtraDB Cluster on November 15th, 2012. Binaries are available from the downloads area or from our software repositories.

Features:

  • Percona XtraDB Cluster has ported Twitter’s MySQL NUMA patch. This patch implements improved NUMA support, preventing imbalanced memory allocation across NUMA nodes.
  • The number of binlog files can be restricted when using Percona XtraDB Cluster with the new …
[Read more]
SQLyog 10.4 improves usability with tens of thousands of databases or tables.

We have released SQLyog MySQL GUI 10.4. The two major new features are:

* Autocomplete speed/performance is drastically increased. This is actually the third time we have done this since autocomplete was introduced in SQLyog more than 5 years ago. According to a recent survey among registered users, our autocomplete implementation is one of the most appreciated features of SQLyog. However, some sluggishness could occur with a very large number (tens of thousands) of databases on the server, or a similar number of tables or other objects in a database. A number of small bugs in autocomplete were also fixed.

* Also, with a lot of databases, tables or other objects, a GUI tool may be tedious to work with (you may need to do a lot of scrolling, for instance). In this release we have implemented a filter in the Object Browser. It works like this: Database objects, where the typed …

[Read more]
MySQL Server 5.6 default my.cnf and my.ini

We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate:

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html


[mysqld]
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
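# (Illustrative arithmetic, not part of the shipped file: a dedicated server
# with 16 GB of RAM would start at 16 GB * 0.70 ≈ 11 GB, i.e. roughly:
# innodb_buffer_pool_size = 11G)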

 

# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin

 

# These are commonly set, remove the # and set as required.
# basedir = .....
# datadir = .....
# port = .....
# socket = …
[Read more]