Planet MySQL
Showing entries 1 to 10 of 141

Displaying posts with tag: hadoop

Replicating from MySQL to Amazon Redshift
Continuent is delighted to announce an exciting Continuent Tungsten feature addition for MySQL users: replication in real-time from MySQL into Amazon Redshift. In this webinar-on-demand we survey Continuent Tungsten's capabilities for data warehouse loading, then zero in on the practical details of setting up replication from MySQL into Redshift. We cover: an introduction to real-time movement …
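The webinar covers Tungsten's own implementation; as background, here is a minimal sketch of the generic staging pattern most MySQL-to-Redshift loaders build on (batch rows to S3, then bulk-load with COPY). The cluster endpoint, table, bucket, and IAM role below are hypothetical, and this is not Tungsten's code.

```python
# Minimal sketch of the generic S3-staging pattern for Redshift loading,
# NOT Continuent Tungsten's implementation. Assumes a gzipped CSV batch
# has already been uploaded to S3 and that the cluster can assume an IAM
# role with read access to the bucket; all names are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="loader",
    password="...",
)
conn.autocommit = True

with conn.cursor() as cur:
    # One bulk COPY per batch is far cheaper than row-by-row INSERTs
    # over the wire, which is why replicators stage changes through S3.
    cur.execute("""
        COPY orders
        FROM 's3://example-bucket/staging/orders.csv.gz'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader'
        CSV GZIP TIMEFORMAT 'auto';
    """)
conn.close()
```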
Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More
August 28, 2014 By Severalnines

Check Out Our Latest Resources for MySQL, MariaDB & MongoDB Clusters

Here is a summary of the resources & tools we've made available over the past few weeks. If you have any questions about them, feel free to contact us!

New Technical Webinars

  [Read more...]
Hadoop BoF Session at OSCON

I have a BoF session at OSCON next week:

Migrating Data from MySQL and Oracle into Hadoop

The session is at 7pm Tuesday night – look for rooms D135 and/or D137/138.

Correction: We are now in E144 on Tuesday, with the Hadoop get-together first at 7pm and the Data Migration session to follow at 8pm.

I’m actually going to be joined by Gwen Shapira from Cloudera, who has a BoF session on Hadoop next door at the same time, along with Eric Herman from Booking.com. We’ll use the opportunity to talk all things Hadoop, but particularly the ingestion of data from MySQL and other databases into the Hadoop datastore.

As always, it'd be great to meet anybody interested in Hadoop at the BoF, so please come along and

  [Read more...]
Making Real-Time Analytics a Reality — TDWI, The Data Warehousing Institute

My article on making the real-time processing of information from traditional transactional stores into Hadoop a reality has been published over at TDWI:

Making Real-Time Analytics a Reality — TDWI, The Data Warehousing Institute.


Big Data Integration & ETL - Moving Live Clickstream Data from MongoDB to Hadoop for Analytics
June 16, 2014 By Severalnines

MongoDB is great at storing clickstream data, but using it to analyze millions of documents can be challenging. Hadoop provides a way of processing and analyzing data at large scale. Since it is a parallel system, workloads can be split across multiple nodes, and computations on large datasets can be done in relatively short timeframes. MongoDB data can be moved into Hadoop using ETL tools like Talend or Pentaho Data Integration (Kettle).

 

In this blog, we’ll show you how to integrate your MongoDB and Hadoop datastores using Talend. We have a MongoDB database collecting clickstream data from several websites. We’ll create a job in Talend to extract the documents from MongoDB, transform and then

  [Read more...]
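The full post builds the job in Talend; for readers who want the gist of the extract step without a GUI tool, here is a rough plain-Python equivalent. The database, collection, and file names are hypothetical; the output is newline-delimited JSON, a format Hadoop tools ingest easily.

```python
# Rough plain-Python equivalent of the extract step described above
# (the post itself uses Talend); database, collection, and file names
# are hypothetical stand-ins.
import json
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
clicks = client["weblogs"]["clickstream"]

with open("clickstream.jsonl", "w") as out:
    # Drop the ObjectId so every field serializes as plain JSON; the
    # resulting newline-delimited file can be pushed to HDFS and read
    # directly by Hive, Pig, or MapReduce jobs.
    for doc in clicks.find({}, {"_id": 0}):
        out.write(json.dumps(doc) + "\n")
```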
theCUBE @ Hadoop Summit 2014 - Robert Hodges (Continuent) with John Furrier and Jeff Kelly on real-time data loading from Oracle and MySQL into Hadoop.
The Hadoop Summit, a leading Apache Hadoop industry conference, has grown significantly over the years, and throughout the day theCUBE, led by hosts John Furrier and Jeff Kelly, featured the best of thought leaders, use cases, data scientists, data analysts, and developers at the event. Watch yesterday's interview with Robert Hodges (CEO, Continuent) on real-time data loading from Oracle and MySQL into Hadoop.
Using InfiniDB MySQL server with Hadoop cluster for data analytics

In my previous post about Hadoop and Impala I benchmarked the performance of analytical queries in Impala.

This time I've tried InfiniDB for Hadoop (open-source version) on modern hardware with an 8-node Hadoop cluster. One of the main advantages (at least for me) of InfiniDB for Hadoop is that it stores the data inside the Hadoop cluster but uses the MySQL server to execute queries. This allows for an easy "migration" of existing analytical tools. The results are quite interesting and promising.
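That MySQL front end is the point: anything that speaks the MySQL protocol can query InfiniDB unchanged. A minimal sketch using mysql-connector-python, where the host, schema, and table are hypothetical stand-ins:

```python
# Because InfiniDB exposes the standard MySQL protocol, existing
# clients connect to it unchanged; host, schema, and table here
# are hypothetical.
import mysql.connector

conn = mysql.connector.connect(
    host="infinidb-um1", port=3306,
    user="analyst", password="...", database="analytics",
)
cur = conn.cursor()
# An ordinary aggregate query; InfiniDB distributes the scan across
# the Hadoop cluster and returns the result through the MySQL server.
cur.execute(
    "SELECT page, COUNT(*) AS hits "
    "FROM pageviews GROUP BY page ORDER BY hits DESC LIMIT 10"
)
for page, hits in cur:
    print(page, hits)
conn.close()
```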

Quick How-To

The InfiniDB documentation is not very clear on step-by-step instructions, so I've created this

  [Read more...]
Webinar-on-Demand: Set Up & Operate Open Source Oracle Replication
Oracle's expensive and complex replication makes it difficult to build cost-effective applications that move data in real time to data warehouses (Oracle, Hadoop, Vertica) and popular databases like MySQL. Fortunately, Continuent Tungsten offers a solution. In this virtual course, you will learn how Continuent Tungsten solves problems with Oracle replication at a fraction of the cost of other …
Continuent at Hadoop Summit

I'm pleased to say that Continuent will be at the Hadoop Summit in San Jose next week (3-5 June). Sadly I will not be attending as I'm taking an exam, but my colleagues Robert Hodges, Eero Teerikorpi and Petri Versunen will be there to answer any questions you have about Continuent products and, of course, the Hadoop replication support built into Tungsten Replicator 3.0.

If you are at the conference, please go along and say hi to the team. And, as always, if there are any questions please let them or me know.


Webinar-on-demand: Set up & operate real-time data loading into Hadoop
Getting data into Hadoop is not difficult, but it is complex if you want to load 'live' or semi-live data into your Hadoop cluster from your Oracle and MySQL databases. There are plenty of solutions available, from manually dumping and loading to the good and bad sides of using a tool like Sqoop. Neither is easy, and both are prone to the problems of lag between the moment you perform the dump and …
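To make that lag problem concrete, here is a sketch of the naive dump-and-load cycle the webinar argues against. The schema, table, and paths are hypothetical; the point is that every row committed after the dump starts stays invisible in Hadoop until the next cycle completes.

```python
# Sketch of the naive dump-and-load cycle criticized above; schema,
# table, and paths are hypothetical. Rows committed in MySQL after the
# dump starts are invisible in Hadoop until the NEXT cycle finishes.
import subprocess
import time

while True:
    # 1. Snapshot the table as tab-delimited text (writes orders.txt
    #    alongside the schema file orders.sql).
    subprocess.run(
        ["mysqldump", "--tab=/tmp/dump", "shop", "orders"],
        check=True,
    )
    # 2. Overwrite the previous snapshot in HDFS.
    subprocess.run(
        ["hdfs", "dfs", "-put", "-f", "/tmp/dump/orders.txt",
         "/data/shop/orders.txt"],
        check=True,
    )
    # 3. Sleep: Hadoop is now up to an hour (plus dump time) stale.
    time.sleep(3600)
```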

Planet MySQL © 1995, 2014, Oracle Corporation and/or its affiliates

Content reproduced on this site is the property of the respective copyright holders. It is not reviewed in advance by Oracle and does not necessarily represent the opinion of Oracle or any other party.