Planet MySQL

Displaying posts with tag: hadoop

2015: More innovation, but still a year of transition

First things first: I could use this title every year; it is an evergreen. For it to make sense, there must be a specific context, and in this case the context is Big Data. We saw new ideas and many announcements in 2014, and in 2015 those ideas will take shape and early versions of innovative products will start flourishing.

Like many other people, I prepared some comments and opinions to post back in early January. Then, soon after the season’s break, I started flying around the world, and the daily routine kept me away from the blog for some time. So, as the last blogger to weigh in, it may be time for me to post my own predictions, …

On Hadoop RDBMS. Interview with Monte Zweben.

“HBase and Hadoop are the only technologies proven to scale to dozens of petabytes on commodity servers, currently being used by companies such as Facebook, Twitter, Adobe and Salesforce.com.”–Monte Zweben.

Is it possible to turn Hadoop into an RDBMS? On this topic, I have interviewed Monte Zweben, Co-Founder and Chief Executive Officer of Splice Machine.

RVZ

Q1. What are the main challenges for Big Data applications and operational analytics that must support real-time, interactive queries on data that is itself updated in real time?

…
Exorcising the CAP Demon

Computer science is like an enormous tool box you can rummage through whenever you have a problem to solve. Most of the tools are sturdy and practical, like algorithms for B-trees. Some are also elegant, like consistent hashing in Dynamo. Finally there are some tools that you never quite figure out even after years of reflection. That piece of steel you are looking at could be Excalibur. Or it could be a rusty knife.

The CAP theorem falls into the last category, at least for me.  It was a major topic in the blogosphere a few years ago and Google Trends shows …
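As a side note on one of the "elegant tools" mentioned above: consistent hashing is easy to sketch. The snippet below is a minimal, illustrative hash ring in Python; the hash function, virtual-node count, and node names are arbitrary assumptions for illustration, not Dynamo's actual implementation.

```python
import bisect
import hashlib


def _hash(key: str) -> int:
    # Stable 64-bit hash of a key (MD5-based, as in many ring implementations).
    return int(hashlib.md5(key.encode()).hexdigest()[:16], 16)


class ConsistentHashRing:
    """Bare-bones hash ring: each key belongs to the first virtual node clockwise."""

    def __init__(self, nodes=(), vnodes=100):
        self.vnodes = vnodes          # virtual nodes per physical node
        self._ring = []               # sorted list of (hash, node) pairs
        for node in nodes:
            self.add_node(node)

    def add_node(self, node: str):
        for i in range(self.vnodes):
            bisect.insort(self._ring, (_hash(f"{node}#{i}"), node))

    def remove_node(self, node: str):
        self._ring = [(h, n) for h, n in self._ring if n != node]

    def get_node(self, key: str) -> str:
        # Walk clockwise from the key's position to the next virtual node.
        h = _hash(key)
        idx = bisect.bisect(self._ring, (h, chr(0x10FFFF)))
        if idx == len(self._ring):
            idx = 0
        return self._ring[idx][1]


ring = ConsistentHashRing(["db1", "db2", "db3"])
print(ring.get_node("user:42"))
```

The point of the ring is that adding or removing a node only remaps the keys on the affected arcs, rather than rehashing everything.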

New Continuent Tungsten 3.0 Combines Power of Highly Available Open Source DBMS with Real-Time Analytics

Business Wire, Oracle OpenWorld 2014, Booth #430: Continuent, Inc., a leading provider of open source database clustering and replication solutions, today announced Continuent Tungsten 3.0, a powerful solution that combines advanced clustering and replication technologies to meet the transaction processing and analytic needs of the entire business. Continuent Tungsten 3.0 enables constant, …

Sneak Peek: Continuent Tungsten 3.0

Get a preview of the next advance in data management technology! Continuent Tungsten 3.0 brings the power of advanced clustering and replication to meet the data management needs of your entire business, including MySQL high availability, disaster recovery, multi-master operation, and real-time data warehouse loading. With Continuent Tungsten you can apply the full power not just of MySQL but all …

Replicating from MySQL to Amazon Redshift

Continuent is delighted to announce an exciting Continuent Tungsten feature addition for MySQL users: real-time replication from MySQL into Amazon Redshift.

In this webinar-on-demand we survey Continuent Tungsten's capabilities for data warehouse loading, then zero in on the practical details of setting up replication from MySQL into Redshift (a rough sketch of the general loading pattern follows the list below). We cover:

Introduction to real-time movement
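The webinar itself is not reproduced here, but the general pattern for loading change data into Redshift usually looks like this: stage a batch of rows in S3, COPY it into a staging table, then merge into the target table. The Python sketch below illustrates only that pattern; it is not Continuent Tungsten's implementation, and the bucket, cluster, IAM role, and table names are hypothetical.

```python
import boto3
import psycopg2

# Hypothetical names -- replace with your own bucket, cluster, and IAM role.
S3_BUCKET = "my-staging-bucket"
S3_KEY = "cdc/orders/batch-000123.csv"
IAM_ROLE = "arn:aws:iam::123456789012:role/redshift-copy-role"

# 1. Stage a batch of change rows (extracted upstream, e.g. from the MySQL binlog) in S3.
boto3.client("s3").upload_file("batch-000123.csv", S3_BUCKET, S3_KEY)

# 2. COPY the batch into a staging table, then merge it into the target table.
conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="analytics",
                        user="loader", password="...")
with conn, conn.cursor() as cur:
    cur.execute(f"""
        COPY staging_orders
        FROM 's3://{S3_BUCKET}/{S3_KEY}'
        IAM_ROLE '{IAM_ROLE}'
        FORMAT AS CSV;
    """)
    # Upsert: delete rows that are being replaced, then insert the new versions.
    cur.execute("DELETE FROM orders USING staging_orders s WHERE orders.id = s.id;")
    cur.execute("INSERT INTO orders SELECT * FROM staging_orders;")
    cur.execute("TRUNCATE staging_orders;")
```

Staging and merging in batches is the usual approach because Redshift favors bulk COPY loads over row-by-row inserts.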

Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More

August 28, 2014, by Severalnines. Check out our latest resources for MySQL, MariaDB & MongoDB clusters.

 

Here is a summary of the resources & tools we’ve made available to you over the past few weeks. If you have any questions about them, feel free to contact us!

 

New Technical Webinars

 

Hadoop BoF Session at OSCON

I have a BoF session at OSCON next week:

Migrating Data from MySQL and Oracle into Hadoop

The session is at 7pm Tuesday night – look for rooms D135 and/or D137/138.

Correction: we are now in E144 on Tuesday, with the Hadoop get-together first at 7pm and the Data Migration session to follow at 8pm.

I’m actually going to be joined by Gwen Shapira from Cloudera, who has a BoF session on Hadoop next door at the same time, along with Eric Herman from Booking.com. …

Making Real-Time Analytics a Reality — TDWI (The Data Warehousing Institute)

My article on making the real-time movement of data from traditional transactional stores into Hadoop a reality has been published over at TDWI:

Making Real-Time Analytics a Reality — TDWI (The Data Warehousing Institute).


Big Data Integration & ETL - Moving Live Clickstream Data from MongoDB to Hadoop for Analytics

June 16, 2014, by Severalnines

MongoDB is great at storing clickstream data, but using it to analyze millions of documents can be challenging. Hadoop provides a way of processing and analyzing data at large scale. Since it is a parallel system, workloads can be split across multiple nodes, and computations on large datasets can be completed in relatively short timeframes. MongoDB data can be moved into Hadoop using ETL tools like Talend or Pentaho Data Integration (Kettle).

 

In this blog, we’ll show you how to integrate your MongoDB and Hadoop datastores using Talend. We have a MongoDB database collecting …
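The post walks through the integration with Talend. Purely as an illustration of the underlying extract-and-load step (with assumed database, collection, and path names, not the post's actual pipeline), a minimal Python sketch might export the clickstream collection as newline-delimited JSON and push it into HDFS:

```python
import json
import subprocess
from pymongo import MongoClient

# Assumed database/collection names -- adjust to your own clickstream store.
client = MongoClient("mongodb://localhost:27017")
collection = client["web"]["clickstream"]

# 1. Export documents as newline-delimited JSON.
with open("clickstream.json", "w") as out:
    for doc in collection.find({}, {"_id": 0}):   # drop the ObjectId for plain JSON
        out.write(json.dumps(doc, default=str) + "\n")

# 2. Push the file into HDFS for downstream MapReduce/Hive jobs.
subprocess.run(["hdfs", "dfs", "-put", "-f", "clickstream.json",
                "/data/clickstream/"], check=True)
```

Newline-delimited JSON is convenient because Hive (with a JSON SerDe), Pig, and Spark can all read it with their standard JSON readers.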

