MySQL is the number one open source relational database management system in the world, used by millions of developers across all application types. DigitalOcean, a fast-growing cloud provider that is increasingly popular among developers, is a great host to consider for your MySQL deployments. In this article, we’re going to show […]
I was recently deploying a few Aurora RDS instances, a process very similar to configuring a regular RDS instance. I noticed a few minor differences in how you configure Aurora RDS parameters, and found very few articles on how the commands should be structured (for RDS as well as Aurora). The only real literature available is the official Amazon RDS documentation.
This blog provides a concise “how-to” guide for quickly changing Aurora RDS parameters using the AWS CLI. Aurora retains the parameter group model introduced with RDS, with new instances getting the default, read-only parameter groups. For a new instance, you need to create and allocate a new parameter group (this requires a DB reboot). After that, you can apply changes to …[Read more]
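As a sketch of the workflow the post describes, the commands below use the real `aws rds` subcommands for cluster-level parameter groups; the group name, cluster identifier, parameter-group family, and parameter values are placeholders chosen for illustration, so check them against your own Aurora version:

```shell
# Create a custom cluster-level parameter group
# (family "aurora5.6" matches the original Aurora MySQL; adjust to yours)
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-params \
    --db-parameter-group-family aurora5.6 \
    --description "Custom Aurora cluster parameters"

# Change a cluster-level parameter in the new group
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-aurora-params \
    --parameters "ParameterName=binlog_format,ParameterValue=MIXED,ApplyMethod=pending-reboot"

# Attach the group to the cluster (pending-reboot changes take
# effect after the instances restart)
aws rds modify-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --db-cluster-parameter-group-name my-aurora-params
```

Instance-level parameters follow the same pattern with the `create-db-parameter-group` / `modify-db-parameter-group` subcommands instead.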
Well, I am currently in the third week of the MongoDB Node.js course "M101JS: MongoDB for Node.js Developers", and it has involved a lot of personal learning about Node and MongoDB.
The third week's subject, "Patterns, Case Studies & Tradeoffs", is really interesting.
Here is a list of topics I learned about:
- MongoDB's rich-document concept.
- MongoDB schema use cases.
- MongoDB one:one, one:many, and many:many use cases.
- How to select a schema based on usage, e.g. whether you want maximum performance or can accept a tradeoff.
One important point I learned during the course:
"Relational databases usually aim for the normalised third normal form, so that data usage is agnostic to the application; a MongoDB schema, by contrast, is arranged very closely around how the application uses the data, and varies accordingly."
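To illustrate the one:many tradeoff the course covers, here is a minimal sketch (the collection shapes and field names are invented for illustration): a blog post can embed its comments as a single rich document, or reference them from a separate collection:

```javascript
// Embedded (rich document): one read fetches the post and its comments.
const postEmbedded = {
  _id: 1,
  title: "Hello MongoDB",
  comments: [
    { author: "alice", text: "Nice post" },
    { author: "bob", text: "+1" }
  ]
};

// Referenced: comments live in their own collection and point back to
// the post. Better when comments are unbounded or updated independently.
const postReferenced = { _id: 1, title: "Hello MongoDB" };
const comments = [
  { _id: 10, postId: 1, author: "alice", text: "Nice post" },
  { _id: 11, postId: 1, author: "bob", text: "+1" }
];

// The "join" is done in application code, as is typical with references.
const commentsForPost = comments.filter(c => c.postId === postReferenced._id);
```

Embedding optimizes for read performance on the common access path; referencing trades an extra round trip for independent growth and updates, which is exactly the application-driven schema choice the quote above describes.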
In our previous post, we introduced the MySQL Fabric utility and said we would dig deeper into it. This post is the first part of our test of MySQL Fabric’s High Availability (HA) functionality.
Today, we’ll review MySQL Fabric’s HA concepts, and then walk you through the setup of a 3-node cluster with one Primary and two Secondaries, running a few basic tests against it. In a second post, we will spend more time generating failure scenarios and documenting how Fabric handles them. (MySQL Fabric is an extensible framework for managing large farms of MySQL servers, with support for high availability and sharding.)
Before we begin, we recommend you read this post by Oracle’s …[Read more]
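For orientation, the basic shape of such a setup uses the `mysqlfabric` command-line tool; the group name and host addresses below are placeholders, and the exact invocations should be checked against the Fabric version in use:

```shell
# Initialize Fabric's backing store and start the Fabric node
mysqlfabric manage setup
mysqlfabric manage start

# Create an HA group and add the three MySQL servers to it
mysqlfabric group create mycluster
mysqlfabric group add mycluster node1:3306
mysqlfabric group add mycluster node2:3306
mysqlfabric group add mycluster node3:3306

# Promote one server to Primary; the others become Secondaries
mysqlfabric group promote mycluster

# Turn on automatic failure detection for the group
mysqlfabric group activate mycluster
```

From there, Fabric-aware connectors route writes to the Primary and can spread reads across the Secondaries.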
Everybody working on Unix or in the database world stumbles over Oracle Berkeley DB every now and then. DB is an open-source embedded database used by applications like OpenLDAP and Postfix, and it has traditionally followed a mostly key-value access pattern. What caught my attention is that the recently released DB 5.0 provides an SQLite-like C API, with the promise of better concurrency and performance than regular SQLite. Time to give it a shot.
So I grabbed the source distribution, checked the documentation and saw that I shall use the …[Read more]
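The excerpt is cut off before the code, but the point of DB 5.0's SQL API is that it is deliberately SQLite-compatible, so the access pattern is the familiar SQLite one. As a rough illustration, here is that pattern through Python's built-in `sqlite3` module; against BDB's SQL API, the same statements would run through its SQLite-compatible C interface instead:

```python
import sqlite3

# In-memory database for the demo; with BDB's SQL API this would be
# a file-backed Berkeley DB environment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, value TEXT)")

# The traditional BDB key-value pattern, expressed through SQL.
conn.execute("INSERT INTO kv VALUES (?, ?)", ("greeting", "hello"))
conn.commit()

row = conn.execute("SELECT value FROM kv WHERE key = ?", ("greeting",)).fetchone()
print(row[0])  # hello
```

The concurrency claim comes from what sits underneath: BDB replaces SQLite's database-level locking with its own page-level locking, while the SQL surface stays the same.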
If you expand a database connection node in the Services window of the IDE, you'll notice a new look. What are all of these nodes under the connection nodes? They're schemas. For the most part, if you're using Java DB, the only schema you'll need to worry about is the app schema. I'd be interested in knowing what developers have used the other schemas for.
When you expand the MySQL connection node, what you get is a list of the databases you've created in MySQL. These databases are actually schemas you've created in your MySQL database.
There you have it.
I'd been doing some stress testing of my MySQL application today, and I was hitting some weird cases. Several transactions were deadlocking (this was expected), but the number of records that got inserted into my table was more than the number I expected after subtracting errors.
My test was fairly simple:
- Fork 15 processes
- Insert and update 100 records in each process, running each INSERT/UPDATE pair inside one transaction
- ROLLBACK on error
Either the INSERT or the UPDATE was expected to fail due to deadlock, and the whole transaction should have rolled back, leaving no record in the table.
Before I go on, I should mention that I was using InnoDB, which does support transactions.
What I expected was that the total number of records in the table + the total number of INSERT/UPDATE aborts due to deadlock should be equal to 1500 (15*100). What …
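The expected bookkeeping can be sketched with a toy simulation (no real MySQL involved: deadlocks are simulated by randomly failing some transactions, and ROLLBACK is modeled by discarding the staged row):

```python
import random

random.seed(42)
table = []   # stands in for the InnoDB table
aborts = 0   # transactions rolled back after a simulated deadlock

for process in range(15):        # 15 forked processes
    for record in range(100):    # 100 INSERT/UPDATE pairs per process
        # One transaction: INSERT then UPDATE, all-or-nothing.
        staged = (process, record)
        deadlocked = random.random() < 0.1   # deadlock on either statement
        if deadlocked:
            aborts += 1           # ROLLBACK: the staged row is discarded
        else:
            table.append(staged)  # COMMIT: the row survives

# With proper rollback semantics the books must balance:
print(len(table) + aborts)  # 1500
```

The invariant `rows + aborts == 1500` holds here by construction; the post's puzzle is precisely that the real run violated it.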
I had some fun yesterday with some odd performance problems. So I did a run with oprofile and got this:
561612 25.0417 /lib64/tls/libc-2.3.4.so memset
429457 19.1491 /usr/lib/debug/lib/modules/2.6.9-34.ELsmp/vmlinux clear_page
214268 9.5540 /usr/lib/debug/lib/modules/2.6.9-34.ELsmp/vmlinux do_page_fault
144293 6.4339 /usr/lib/debug/lib/modules/2.6.9-34.ELsmp/vmlinux do_no_page
94410 4.2097 /usr/lib/debug/lib/modules/2.6.9-34.ELsmp/vmlinux buffered_rmqueue
64998 2.8982 /lib64/tls/libc-2.3.4.so memcpy
59565 2.6559 /usr/lib/debug/lib/modules/2.6.9-34.ELsmp/vmlinux __down_read_trylock
59369 2.6472 /usr/lib/debug/lib/modules/2.6.9-34.ELsmp/vmlinux handle_mm_fault
47312 2.1096 /usr/lib/debug/lib/modules/2.6.9-34.ELsmp/vmlinux free_hot_cold_page
39161 1.7462 /usr/lib/debug/lib/modules/2.6.9-34.ELsmp/vmlinux release_pages
39140 …[Read more]
The NDB/Connectors have added support for Ruby, as well as Asynchronous Transaction support for Java, Python and Perl.
The Ruby support, of course, means that now you can interact with your MySQL Cluster installation using the NDBAPI from all your Ruby code.
The async stuff is especially cool, because it means you can send transactions to the Cluster and get responses by way of callbacks defined in the connector language. So you can do something like this:
class testaclass(object):
    def __init__(self, recAttr):
        self.recAttr = recAttr

    def __call__(self, ret, myTrans):
        print "value = ", self.recAttr.get_value()

#snip

myTrans = myNdb.startTransaction()
myOper = myTrans.getNdbOperation("mytablename")
myOper.readTuple(ndbapi.NdbOperation.LM_Read)
myOper.equal("ATTR1", 245755)
myRecAttr = myOper.getValue("ATTR2")
a = testaclass(myRecAttr)
…[Read more]
Here’s a very rough pre-release of NdbObject, an ORM for Python that maps objects to NDB directly, with no SQL code.