Displaying posts with tag: Open Source
Keynoting at OpenSQLCamp-Froscon next week

Speaking of conferences in general, and OpenSQLCamps in particular, there is one a week from now, and I will be speaking! It is organized as a single-room track at Froscon, Germany, by Felix Schupp (Blackray/Softmethod) and Volker Oboda (Primebase). The content is mostly a collection of database-related talks originally submitted via the main Froscon call for papers. (In other words, unlike many previous camps, the schedule is all set.)

I'm a little excited about this one, because for the first time in my career as a speaker I will be giving the keynote. The title of my talk is:

How I learned to use SQL and how I learned not to use it

[Read more]

Call for disclosure on MySQL Conference 2012

Percona has announced Percona Live MySQL Conference and Expo 2012. Kudos for their vision and entrepreneurship. I have seen comments praising their commitment to the community and their willingness to fill a void. I have to dot a few i's and cross some t's on this matter.
That was not the only game in town. By the end of June, there were strong clues that O'Reilly was not going to organize a conference. The question of who could fill the void started to pop up. The MySQL Council started exploring the options for a community-driven conference to replace the missing one. The general plan was along the lines of "let's see who is in, and eventually run a conference without the big organizer. If nobody steps up, the IOUG can offer a venue in Las Vegas for an independent MySQL conference". The plan required general …

[Read more]
Real-time streaming data aggregation

Dear Kettle users,

Most of you usually use a data integration engine to process data in a batch-oriented way. Pentaho Data Integration (Kettle) is typically deployed to run monthly, nightly, or hourly workloads. Sometimes folks run micro-batches of work every minute or so. However, it’s less well known that our beloved transformation engine can also be used to stream data indefinitely (never ending) from a source to a target. This sort of data integration is sometimes referred to as “streaming”, “real-time”, “near real-time”, “continuous” and so on. Typical examples of situations where you have a never-ending supply of data that needs to be processed the instant it becomes available are JMS (Java Message Service), RDBMS log sniffing, on-line fraud analyses, web or application …
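
To make the batch-versus-streaming distinction above concrete, here is a minimal, self-contained Java sketch of the pattern the post describes. This is not Kettle's API; the in-memory queue, the producer thread, and the transform/write methods are stand-ins chosen purely for illustration. The point is the consumer loop with no end-of-input condition, which handles each record the instant it arrives.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Streaming (never-ending) processing in miniature: the consumer loop has no
    // end-of-input condition and handles each record as soon as it is available.
    public class StreamingVsBatch {

        // Stand-in for a never-ending source such as a JMS topic or an RDBMS log.
        private static final BlockingQueue<String> SOURCE = new LinkedBlockingQueue<>();

        public static void main(String[] args) throws InterruptedException {
            // Producer thread simulates records arriving continuously.
            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; ; i++) {
                        SOURCE.put("record-" + i);
                        TimeUnit.MILLISECONDS.sleep(250);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.setDaemon(true);
            producer.start();

            // Streaming consumer: transform and load each record the instant it arrives.
            while (!Thread.currentThread().isInterrupted()) {
                String record = SOURCE.poll(1, TimeUnit.SECONDS);
                if (record != null) {
                    writeToTarget(transform(record));
                }
            }
        }

        private static String transform(String record) {
            return record.toUpperCase(); // placeholder for the real transformation logic
        }

        private static void writeToTarget(String row) {
            System.out.println("loaded: " + row); // placeholder for the real target system
        }
    }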

[Read more]
From Open Source to SaaS

I'm about to take a week off from my new gig as COO at Zendesk and it got me reflecting on the company and my decision to join.  I stayed with MySQL through the Sun acquisition and left when Oracle acquired Sun.  Although I have a lot of respect for Oracle, it seemed to me the only interesting jobs would be those that report directly to Larry Ellison.  So I took some time off to travel, worked as an EIR at Scale Ventures for a few months and began thinking about what I wanted to do next.

I turned down offers from companies and investors to come in and "repeat the MySQL playbook" in Big Data or NoSQL or apps or whatever.  I think Open Source can be a fantastic …

[Read more]
Planned change in Maatkit & Aspersa development

I’ve just sent an email to the Maatkit discussion list to announce a planned change to how Maatkit (and Aspersa) are developed. In short, Percona plans to create a Percona Toolkit of MySQL-related utilities, as a fork of Maatkit and Aspersa. I’m very happy about this change, and I welcome your responses to that thread on the discussion list.

Related posts:

  1. Aspersa, a new opensource toolkit
  2. Four companies to sponsor Maatkit development
  3. How …
[Read more]
DFWUUG Talk July 7th

The Dallas / Fort Worth Unix User Group asked me to present on Open Source BI tools on July 7th. They meet at 7 PM at the IBM Innovation Center, 13800 Diplomat Drive (see the website for details), and will serve pizza! All are welcome, see you there!


[Read more]
PDI Loading into LucidDB

By far, the most popular way for PDI users to load data into LucidDB is to use the PDI Streaming Loader. The streaming loader is a native PDI step that:

  • Enables high-performance loading directly over the network, without the need for intermediate I/O or shipping of data files.
  • Lets users choose more interesting (from a DW perspective) load types into tables. In particular, in addition to simple INSERTs it allows MERGE (aka UPSERT) and UPDATE, all handled by the same bulk loader (see the sketch after this list).
  • Enables the metadata for the load to be managed, scheduled, and run in PDI.
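
To make the three load types concrete, below is a minimal JDBC sketch of what INSERT, MERGE (aka UPSERT), and UPDATE look like as plain SQL; the streaming loader step performs the equivalent work in bulk over the network. The driver class, connection URL, credentials, and the dw/staging table names are assumptions for illustration and are not taken from the post.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // The three load types as plain SQL over JDBC. Driver class, URL, credentials,
    // and the dw/staging schema and table names are assumptions for illustration.
    public class LucidDbLoadTypes {

        public static void main(String[] args) throws Exception {
            Class.forName("org.luciddb.jdbc.LucidDbClientDriver"); // assumed driver class
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:luciddb:http://localhost:8034", "sa", ""); // assumed URL/credentials
                 Statement stmt = conn.createStatement()) {

                // 1. Plain INSERT: append rows from a hypothetical staging table.
                stmt.executeUpdate(
                    "INSERT INTO dw.dim_customer (customer_id, name) "
                  + "SELECT customer_id, name FROM staging.customers");

                // 2. MERGE (aka UPSERT): update rows that match, insert the rest.
                stmt.executeUpdate(
                    "MERGE INTO dw.dim_customer t "
                  + "USING staging.customers s ON t.customer_id = s.customer_id "
                  + "WHEN MATCHED THEN UPDATE SET name = s.name "
                  + "WHEN NOT MATCHED THEN INSERT (customer_id, name) "
                  + "VALUES (s.customer_id, s.name)");

                // 3. UPDATE: modify existing rows only.
                stmt.executeUpdate(
                    "UPDATE dw.dim_customer SET name = UPPER(name) "
                  + "WHERE customer_id IN (SELECT customer_id FROM staging.customers)");
            }
        }
    }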

However, we’ve had some known issues. In fact, until PDI 4.2 GA and LucidDB 0.9.4 GA it’s pretty problematic unless you run through the process of patching LucidDB outlined on this page: …

[Read more]
0.9.4 did not hit the 1 year mark!

Our last LucidDB release, 0.9.3, is now just over 12 months behind us: it shipped on June 16, 2010. We were really, really trying to beat the one-year mark for our 0.9.4 release, but we just couldn’t. A tenet of good open source development is to release early and often, and we need to do better. Since the 0.9.3 release we’ve:

  • Built out an entire Web Services infrastructure
  • Developed a wicked cool Admin user interface
  • Developed cool connectors to Hive, CouchDB
  • Built a whole ton of extensions (auto indexing, DDL generation, improved load routines)
  • Scriptable …
[Read more]
HPCC vs Hadoop at a glance

Update

Since this article was written, HPCC has undergone a number of significant changes and updates. These address some of the criticism voiced in this blog post, such as the license (updated from AGPL to Apache 2.0) and integration with other tools. For more information, refer to the comments left by Flavio Villanustre and Azana Baksh.

The original article can be read unaltered below:

Yesterday I noticed a tweet by Andrei Savu, which prompted me to read the related GigaOM article and then check out the HPCC Systems …

[Read more]
Open Core or Solutions: Choosing the Right Open Source Product Architecture

Today, more and more proprietary software vendors are choosing to go Open Source. Doing this enables them to leverage the community benefits of Open Source, shorten the sales cycle, and gain a competitive advantage over other proprietary products.

However, for those firms considering a switch to Open Source, there are some hard decisions to make with regard to their product architecture. Should they provide only a single Open Source product, and earn revenue from add-on services like support and consulting (RedHat)? Or should they adopt the Open Core model, offering their product under both Open Source and proprietary licenses (MySQL)? Or …

[Read more]