Displaying posts with tag: hadoop
Exorcising the CAP Demon

Computer science is like an enormous tool box you can rummage through whenever you have a problem to solve. Most of the tools are sturdy and practical, like algorithms for B-trees. Some are also elegant, like consistent hashing in Dynamo. Finally there are some tools that you never quite figure out even after years of reflection. That piece of steel you are looking at could be Excalibur. Or it could be a rusty knife.

The CAP theorem falls into the last category, at least for me.  It was a major topic in the blogosphere a few years ago and Google Trends shows steadily increasing interest in the term since 2010.  It's not my goal to explain CAP fully--a good informal description is …

[Read more]
New Continuent Tungsten 3.0 Combines Power of Highly Available Open Source DBMS with Real-Time Analytics

Business Wire, Oracle Open World 2014, Booth #430 - Continuent, Inc., a leading provider of open source database clustering and replication solutions, today announced Continuent Tungsten 3.0, a powerful solution that combines advanced clustering and replication technologies to meet the transaction processing and analytic needs of the entire business. Continuent Tungsten 3.0 enables constant …

Sneak Peek: Continuent Tungsten 3.0

Get a preview of the next advance in data management technology!  Continuent Tungsten 3.0 brings the power of advanced clustering and replication to meet the data management needs of your entire business, including MySQL high availability, disaster recovery, multi-master operation, and real-time data warehouse loading. With Continuent Tungsten you can apply the full power not just of MySQL but all …

Replicating from MySQL to Amazon Redshift

Continuent is delighted to announce an exciting Continuent Tungsten feature addition for MySQL users: real-time replication from MySQL into Amazon Redshift.

In this webinar-on-demand we survey Continuent Tungsten capabilities for data warehouse loading, then zero in on the practical details of setting up replication from MySQL into Redshift. We cover:

Introduction to real-time movement
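
For context, the native bulk-load path into Redshift is to stage batches of rows as flat files in S3 and then issue a COPY into the target table. Below is a minimal sketch of that generic pattern only; the bucket, cluster endpoint, table, file names, and IAM role are hypothetical placeholders, it assumes the boto3 and psycopg2 packages, and it illustrates the load pattern rather than Tungsten's actual implementation.

    import boto3                      # pip install boto3
    import psycopg2                   # pip install psycopg2-binary

    BUCKET = "my-staging-bucket"      # hypothetical names throughout
    KEY = "staging/orders-batch-0001.csv"
    IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftCopyRole"

    # 1. Stage a CSV batch (e.g. rows extracted from MySQL) in S3.
    boto3.client("s3").upload_file("/tmp/orders-batch-0001.csv", BUCKET, KEY)

    # 2. Ask Redshift to bulk-load the staged file with COPY.
    conn = psycopg2.connect(host="example.abc123.us-east-1.redshift.amazonaws.com",
                            port=5439, dbname="analytics",
                            user="loader", password="secret")
    with conn, conn.cursor() as cur:
        cur.execute("""
            COPY public.orders
            FROM 's3://{bucket}/{key}'
            IAM_ROLE '{role}'
            FORMAT AS CSV
        """.format(bucket=BUCKET, key=KEY, role=IAM_ROLE))
    conn.close()

A continuous replication pipeline repeats this cycle, turning row changes captured from MySQL into staged files and periodic COPY operations.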

Resources for Database Clusters: Performance Tuning for HAProxy, Support for MariaDB 10, Technical Blogs & More

August 28, 2014 By Severalnines
Check Out Our Latest Resources for MySQL, MariaDB & MongoDB Clusters

Here is a summary of the resources and tools that we’ve made available to you over the past few weeks. If you have any questions about these, feel free to contact us!

New Technical Webinars

Performance Tuning of HAProxy for Database Load Balancing

09 September 2014 - with Baptiste Assmann of HAProxy Technologies

Do you know what HAProxy can tell you about your application and database instances? Do you know the difference …
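
As a taste of what HAProxy can tell you: it exposes live per-frontend, per-backend, and per-server counters over its admin stats socket. Here is a minimal sketch of reading them in Python; the socket path is an assumption, and the socket must first be enabled in haproxy.cfg with a "stats socket" line.

    import csv
    import io
    import socket

    # Assumed socket path; requires something like
    #   stats socket /var/run/haproxy.sock mode 600 level admin
    # in haproxy.cfg.
    SOCKET_PATH = "/var/run/haproxy.sock"

    def show_stat(path=SOCKET_PATH):
        """Return HAProxy's 'show stat' CSV output as a list of dicts."""
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(path)
        sock.sendall(b"show stat\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
        sock.close()
        text = b"".join(chunks).decode()
        # The first line is a header beginning with '# pxname,svname,...'.
        reader = csv.DictReader(io.StringIO(text.lstrip("# ")))
        return [row for row in reader if row.get("svname")]

    if __name__ == "__main__":
        for row in show_stat():
            # Per-proxy / per-server health and current session counts.
            print(row["pxname"], row["svname"], row["status"], row["scur"])

Counters such as scur (current sessions), status, and the queue and error columns are the usual starting points for load-balancer tuning.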

[Read more]
Hadoop BoF Session at OSCON

I have a BoF session at OSCON next week:

Migrating Data from MySQL and Oracle into Hadoop

The session is at 7pm Tuesday night – look for rooms D135 and/or D137/138.

Correction: We are now in E144 on Tuesday, with the Hadoop get-together first at 7pm and the Data Migration session to follow at 8pm.

I’m actually going to be joined by Gwen Shapira from Cloudera, who has a BoF session on Hadoop next door at the same time, along with Eric Herman from Booking.com. We’ll use the opportunity to talk all things Hadoop, but particularly the ingestion of data from MySQL and other databases into the Hadoop datastore.

As always, it’d be great to meet anybody interested in Hadoop at the BoF, so please come along, introduce yourselves, and …

[Read more]
Making Real-Time Analytics a Reality — TDWI - The Data Warehousing Institute

My article on how to make real-time processing of information from traditional transactional stores into Hadoop a reality has been published over at TDWI:

Making Real-Time Analytics a Reality — TDWI - The Data Warehousing Institute.


Filed under: Articles. Tagged: analytics, big data, data migration, databases, hadoop, mysql, …

[Read more]
Big Data Integration & ETL - Moving Live Clickstream Data from MongoDB to Hadoop for Analytics

June 16, 2014 By Severalnines

MongoDB is great at storing clickstream data, but using it to analyze millions of documents can be challenging. Hadoop provides a way of processing and analyzing data at large scale. Since it is a parallel system, workloads can be split across multiple nodes and computations on large datasets can be completed in relatively short timeframes. MongoDB data can be moved into Hadoop using ETL tools like Talend or Pentaho Data Integration (Kettle).

In this blog, we’ll show you how to integrate your MongoDB and Hadoop datastores using Talend. We have a MongoDB database collecting clickstream data from several websites. We’ll create a job in Talend to extract the documents from MongoDB, transform and then load them into HDFS. We will also show you how to schedule this job to be executed every 5 minutes.
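
Talend builds this as a graphical job, but the underlying pattern (pull recent documents from MongoDB, reshape them, write them to HDFS, run the job on a schedule) can be sketched in a few lines. The snippet below is a generic illustration with hypothetical connection details and field names rather than the Talend job described here; it assumes the pymongo and hdfs (WebHDFS) Python packages.

    import json
    from datetime import datetime, timedelta

    from pymongo import MongoClient          # pip install pymongo
    from hdfs import InsecureClient          # pip install hdfs  (WebHDFS client)

    MONGO_URI = "mongodb://mongo1:27017"     # hypothetical hosts and names
    HDFS_URL = "http://namenode:50070"
    HDFS_DIR = "/data/clickstream"

    def export_last_five_minutes():
        mongo = MongoClient(MONGO_URI)
        clicks = mongo["web"]["clickstream"]
        since = datetime.utcnow() - timedelta(minutes=5)

        # Extract: only documents collected since the last run.
        docs = clicks.find({"ts": {"$gte": since}})

        # Transform: keep a flat subset of fields, one JSON object per line.
        lines = "\n".join(
            json.dumps({"ts": doc["ts"].isoformat(), "url": doc.get("url"),
                        "referrer": doc.get("referrer"), "ip": doc.get("ip")})
            for doc in docs
        )

        # Load: write a per-run file into HDFS.
        hdfs = InsecureClient(HDFS_URL, user="etl")
        path = "{}/clicks-{:%Y%m%d%H%M}.json".format(HDFS_DIR, since)
        hdfs.write(path, data=lines, encoding="utf-8")

    if __name__ == "__main__":
        # Scheduled every 5 minutes, e.g. with cron:
        # */5 * * * * /usr/bin/python /opt/etl/clickstream_to_hdfs.py
        export_last_five_minutes()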

 

Test Case

We have an application …

[Read more]
theCube @ Hadoop Summit 2014 - Robert Hodges (Continuent) with John Furrier and Jeff Kelly on real-time data loading from Oracle and MySQL into Hadoop.

The Hadoop Summit, a leading Apache Hadoop industry conference, has grown significantly over the years, and throughout the day, theCUBE, led by hosts John Furrier and Jeff Kelly, featured the best of thought leaders, use cases, data scientists, data analysts, and developers at the event. Watch yesterday's interview with Robert Hodges (CEO, Continuent) on real-time data loading from Oracle and …

Using InfiniDB MySQL server with Hadoop cluster for data analytics

In my previous post about Hadoop and Impala, I benchmarked the performance of analytical queries in Impala.

This time I’ve tried InfiniDB for Hadoop (open-source version) on modern hardware with an 8-node Hadoop cluster. One of the main advantages (at least for me) of InfiniDB for Hadoop is that it stores the data inside the Hadoop cluster but uses the MySQL server to execute queries. This allows for an easy “migration” of existing analytical tools. The results are quite interesting and promising.
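
The “easy migration” point is that InfiniDB presents a standard MySQL front end, so existing drivers and SQL work unchanged even though the data itself lives on the Hadoop cluster. A minimal sketch of what that looks like from Python follows; the host, credentials, schema, table, engine name, and query are illustrative assumptions rather than details from this post, and it assumes the pymysql package.

    import pymysql  # pip install pymysql

    # Connect to the InfiniDB user module exactly as you would to a MySQL server.
    conn = pymysql.connect(host="infinidb-um1", user="analytics",
                           password="secret", database="bench")
    try:
        with conn.cursor() as cur:
            # Columnar table; in the Hadoop edition its data files live on HDFS.
            cur.execute("""
                CREATE TABLE IF NOT EXISTS ontime (
                    year SMALLINT,
                    carrier CHAR(2),
                    dep_delay INT
                ) ENGINE=InfiniDB
            """)
            # A typical analytical query, written exactly as it would be for MySQL.
            cur.execute("""
                SELECT carrier, AVG(dep_delay) AS avg_delay
                FROM ontime
                WHERE year = 2013
                GROUP BY carrier
                ORDER BY avg_delay DESC
                LIMIT 10
            """)
            for carrier, avg_delay in cur.fetchall():
                print(carrier, avg_delay)
    finally:
        conn.close()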

Quick How-To

The InfiniDB documentation is not very clear on step-by-step instructions, so I’ve created this quick guide:

  1. Install Hadoop cluster (minimum …
[Read more]