Displaying posts with tag: TokuView
Sysbench Benchmark for MongoDB

As we continue to test our Fractal Tree Indexing with MongoDB, I’ve been updating my benchmark infrastructure so I can compare performance, correctness, and resource utilization.  Sysbench has long been a standard for testing MySQL performance, so I created a version that is compatible with MongoDB.  You can grab my current version of Sysbench for MongoDB here.

So what exactly is Sysbench?  According to the Sysbench homepage, “Sysbench is a modular, cross-platform and multi-threaded benchmark tool for evaluating OS [Operating System] parameters that are important for a system running a database under intensive load.”
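To make that concrete, here is a minimal sketch in Python with pymongo of what one round of a sysbench-style point-query workload against a collection like the sbtest collections described below might look like. The document shape and field names are illustrative assumptions, not the actual schema used by the sysbench-for-MongoDB port.

# A minimal sketch of a sysbench-style MongoDB workload (illustrative only;
# field names and document shape are assumptions, not the real tool's schema).
import random
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
coll = client["sbtest"]["sbtest1"]

def load(num_docs=10000):
    # Seed the collection with sequential _id values, as a benchmark loader would.
    docs = [{"_id": i, "k": random.randint(1, num_docs), "c": "x" * 120}
            for i in range(num_docs)]
    coll.insert_many(docs)

def point_query_round(num_docs=10000, queries=10):
    # Issue a handful of point lookups by _id, the core of a read-only OLTP pass.
    for _ in range(queries):
        coll.find_one({"_id": random.randint(0, num_docs - 1)})

A real run would spawn many concurrent clients against all of the collections and report cumulative throughput and latency.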

  • Sysbench schema
    • 16 copies of the same collection, named sbtest1 … sbtest16, each with 10 …
[Read more]
The Last Mile for Big Data – Strata Overview with Jeff Kelly of Wikibon (Part 2)

During the second half of our CUBE discussion with Wikibon analyst Jeff Kelly at this year’s Strata Conference in Santa Clara, we talked about the tipping point for Big Data. Strata veterans could see at a glance that this year’s conference was markedly different. No longer the exclusive domain of geeks and database administrators, this year’s Strata featured some of the biggest enterprise vendors around. With heavyweight enterprise players Intel and EMC Greenplum announcing their own Hadoop distributions, big data is clearly going mainstream. Now that we know how to capture, store, access and analyze big data, what’s the next step? Listen in to hear my conversation with Jeff Kelly about taking big data down its last mile and finally putting it in the hands of business users.

Source: …

[Read more]
MySQL and MongoDB – Strata Discussion with Jeff Kelly of Wikibon (Part 1)

We had the opportunity to do a CUBE interview with Wikibon analyst Jeff Kelly at last week’s Strata Conference in Santa Clara. In the first part of our conversation, we discuss how our success in integrating Tokutek’s Fractal Tree® technology into MySQL has led us to another popular database, MongoDB. We explain the results of our recent benchmarking tests with MongoDB, which indicate that Fractal Tree indexing can also improve performance for this popular NoSQL database, with faster insertion rates, lower query latency and greater …

[Read more]
MongoDB + Fractal Tree Indexes = High Compression

One doesn’t have to look far to see that there is strong interest in MongoDB compression. MongoDB has an open ticket from 2009 titled “Option to Store Data Compressed” with Fix Version/s planned but not scheduled. The ticket has a lot of comments, mostly from MongoDB users explaining their use-cases for the feature. For example, Khalid Salomão notes that “Compression would be very good to reduce storage cost and improve IO performance” and Andy notes that “SSD is getting more and more common for servers. They are very fast. The problems are high costs and low capacity.” There are many …

[Read more]
NoSQL is Great, But You Still Need Indexes

I’ve said it before, and, as is the nature of these things, I’ll almost certainly say it again: your database performance is only as good as your indexes.

That’s the grand thesis, so what does that mean? In any DB system — SQL, NoSQL, NewSQL, PostSQL, … — data gets ingested and organized. And the system answers queries. The pain point for most users is the speed of answering queries. And query speed (both latency and throughput, to be exact) depends on how the data is organized. In short: Good Indexes, Fast Queries; Poor Indexes, Slow Queries.
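To illustrate the point in MongoDB terms (the collection and field names below are hypothetical), here is a small pymongo sketch: the same query goes from a full collection scan to an index lookup once a secondary index organizes the data for that access path.

# "Good Indexes, Fast Queries" in miniature (hypothetical collection/field names).
from pymongo import MongoClient, ASCENDING

coll = MongoClient()["demo"]["events"]

# Without an index on "user_id", this query has to scan the whole collection.
plan_before = coll.find({"user_id": 42}).explain()

# Declare a secondary index; the data is now organized for this access path,
# so the same query becomes an index lookup instead of a full scan.
coll.create_index([("user_id", ASCENDING)])
plan_after = coll.find({"user_id": 42}).explain()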

But building indexes is hard work, or at least it has been for the last several decades, because almost all indexing is done with B-trees. That’s true of commercial databases, of MySQL, and of most NoSQL solutions that do indexing. (The ones that don’t do …

[Read more]
Fast Updates with TokuDB

With TokuDB v6.6 out now, I’m excited to present one of my favorite enhancements: fast updates with TokuDB. Update intensive applications can have their throughput limited by the random read capacity of the storage system. The cause of the throughput limit is the read-modify-write algorithm that MySQL uses when processing update statements. MySQL reads a row from the storage engine, applies the updates to it, and then writes the new row to the storage engine. To address this throughput limit, TokuDB uses a different update algorithm that simply encodes the update expressions of the SQL statement into tiny programs that are stored in an update Fractal Tree® message. This update message is injected into the root of the Fractal Tree index. …
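As a rough illustration of the difference, here is a conceptual sketch in Python. It is not TokuDB’s actual implementation, and the storage and index objects are hypothetical stand-ins.

# Classic read-modify-write: the row must be read before it can be rewritten,
# so throughput is bounded by random read capacity when rows aren't cached.
def update_read_modify_write(storage, key, delta):
    row = storage.read(key)        # potentially a random I/O
    row["counter"] += delta
    storage.write(key, row)

# Fast update (conceptually): encode the update expression as a message and
# inject it at the root of the index; it is applied later as the message
# works its way down, so no up-front read is needed.
def update_with_message(index, key, delta):
    index.inject_message(("increment", key, "counter", delta))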

[Read more]
Concurrency Improvements in TokuDB v6.6 (Part 2)

In Part 1, we showed performance results of some of the work that’s gone into TokuDB v6.6. In this post, we’ll take a closer look at how this happened on the engineering side, and how to think about the performance characteristics in the new version.

Background

It’s easiest to think about our concurrency changes in terms of a Fractal Tree® index that has nodes like a B-tree index, and buffers on each node that batch changes for the subtree rooted at that node. We have materials that describe this available here, and there’s a toy code sketch after this excerpt, but we can proceed just knowing that:

  1. To inject data into the tree, you need to store a message in a buffer at the root of the tree. These messages are moved down the tree, so you can find messages in all the internal …
[Read more]
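Here is the toy sketch referenced above: a hypothetical, heavily simplified node structure (not Tokutek’s code) with a buffer per node that batches messages for the subtree below it.

# Toy model of a buffered tree node (hypothetical; routing and storage are
# deliberately simplified). New data enters as a message in the root's buffer
# and is flushed down toward the leaves in batches.
class Node:
    def __init__(self, children=None):
        self.children = children or []  # empty list means this is a leaf
        self.buffer = []                # messages pending for this subtree
        self.rows = {}                  # leaf-level storage (toy)

    def inject(self, message):
        # Point 1 above: injection just appends a message to this node's buffer.
        self.buffer.append(message)

    def flush(self):
        # Push buffered messages one level down, or apply them at a leaf.
        for op, key, value in self.buffer:
            if not self.children:
                self.rows[key] = value  # apply the message at the leaf
            else:
                # Real trees route by pivot keys; a hash keeps the toy short.
                child = self.children[hash(key) % len(self.children)]
                child.inject((op, key, value))
        self.buffer.clear()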
Concurrency Improvements in TokuDB v6.6 (Part 1)

With TokuDB v6.6 out now, I’m excited to present one of my favorite enhancements: concurrency within a single index. Previously, while there could be many SQL transactions in-flight at any given moment, operations inside a single index were fairly serialized. We’ve been working on concurrency for a few versions, and things have been getting a lot better over time. Today I’ll talk about what to expect from v6.6. Next time, we’ll see why.

Summary of Results

Running multiple iiBench clients on a single MySQL instance, we see a big improvement in cumulative insertion speed at all concurrency levels: a 33.9% gain in single-threaded performance and 51.8% at 64 threads.
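For context, the shape of such a measurement is roughly the following. This is a sketch only, not iiBench itself, and insert_batch is a placeholder for whatever client call actually inserts rows into the table under test.

# Measure cumulative insertion throughput at a given concurrency level.
# This is a sketch, not iiBench; `insert_batch` is a placeholder callable.
import time
import threading

def run_clients(insert_batch, num_threads=64, batches_per_thread=100):
    def worker():
        for _ in range(batches_per_thread):
            insert_batch()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    return (num_threads * batches_per_thread) / elapsed  # batches per second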

[Read more]
Tracking 5.3 Billion Mutations: Using MySQL for Genomic Big Data

University of Montreal Tracks Genomic Data With Tokutek’s TokuDB.

Faster insertion rates, improved scalability and agility support the lab’s fast-growing research database as it expands from hundreds of GBs to 1 TB and beyond.

Issue addressed: The MySQL database used for genomic research must be able to quickly ingest huge amounts of incoming data – hundreds of thousands of records every day. It also must be able to retrieve data quickly in response to a diverse set of research requests.

Enabling the Hunt for New Cures for Diseases by Seamlessly Processing Billions of Mutations in Epidemiology Records

The Organization: The Philip Awadalla Laboratory is the Medical and …

[Read more]
The Results Are In!

We wanted to take a moment to say thanks to all of our customers and to the wider MySQL and MariaDB community. Today we announced a doubling of our customer base for the year ending December 31, 2012. Significant milestones over the last year included new technology and service partnerships, several awards, rapid hiring, and three upgrades to TokuDB®. We even dabbled in some MongoDB benchmarks. And to fuel continued growth in 2013, we secured additional venture capital funding last November.

Did You Hear? NASA Uses TokuDB for Big Data with MySQL!

To read the full press release and learn more, see here. To get started with TokuDB, …

[Read more]