Displaying posts with tag: Kafka
The Uber Engineering Tech Stack, Part II: The Edge and Beyond

The conclusion of a two-part series on the tech stack that Uber Engineering uses to make transportation as reliable as running water, everywhere, for everyone, as of spring 2016.

MaxScale: A new tool to solve your MySQL scalability problems

Ever since MySQL replication has existed, people have dreamed of a good solution to automatically split read operations from write operations, sending the writes to the MySQL master and load balancing the reads over a set of MySQL slaves. While at first it seems easy to solve, the reality is far more complex.

First, the tool needs to correctly parse and analyze all the forms of SQL that MySQL supports in order to sort writes from reads, which is not as easy as it seems. Second, it needs to take into account whether a session is in a transaction or not.

While in a transaction, InnoDB’s default transaction isolation level, REPEATABLE READ, and the MVCC framework ensure that you get a consistent view for the duration of the transaction. That means all statements executed inside a transaction must run on the master, but once the transaction commits or rolls back, the following SELECT statements on the session can again be …
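To see why this is tricky, here is a minimal sketch, in Python with hypothetical `master` and `replica` connection handles, of the routing decision such a tool has to make. A real proxy like MaxScale parses the full SQL grammar rather than pattern-matching, which is precisely the hard part described above:

```python
import re

class NaiveRouter:
    """Toy read/write splitter; illustrative only, not how MaxScale works."""

    def __init__(self, master, replica):
        self.master = master
        self.replica = replica
        self.in_transaction = False

    def route(self, sql):
        stmt = sql.strip().lower()
        # Track transaction state: everything inside a transaction must go
        # to the master so the REPEATABLE READ snapshot stays consistent.
        if stmt.startswith(("begin", "start transaction")):
            self.in_transaction = True
        target = self.replica if (
            not self.in_transaction
            and stmt.startswith("select")
            # SELECT ... FOR UPDATE takes locks, so it is master-only.
            and not re.search(r"for\s+(update|share)\b", stmt)
        ) else self.master
        if stmt.startswith(("commit", "rollback")):
            self.in_transaction = False
        return target
```

Even this toy version runs into both pitfalls the post names: classifying every statement form correctly, and tracking transaction state per session.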

[Read more]
A wild supposition: can MySQL be Kafka?

This is an idea I presented at Percona Live 2015.

Is MySQL an avatar of Apache Kafka?

Can it be Kafka ?

Yes, it can.

This talk takes a shot at modeling MySQL as Kafka.

PS: it’s unconventional, hence a WILD supposition

slides @

http://www.slideshare.net/jaihind213/can-mysql-bekafka

or

http://www.percona.com/live/mysql-conference-2015/sessions/wild-supposition-can-mysql-be-kafka
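For readers who don’t click through, the analogy is usually drawn roughly as follows. This is my own sketch with hypothetical names, using sqlite3 as a stand-in for MySQL so it runs anywhere: a topic partition becomes an append-only table whose auto-increment key plays the role of the Kafka offset.

```python
import sqlite3

# sqlite3 stands in for MySQL here; the SQL is deliberately generic.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE events ("
    "offset_id INTEGER PRIMARY KEY AUTOINCREMENT, payload TEXT)"
)

# Producer: appending a row == publishing a message to the "partition".
db.execute("INSERT INTO events (payload) VALUES (?)", ("ride_requested",))
db.execute("INSERT INTO events (payload) VALUES (?)", ("ride_accepted",))

# Consumer: track your own offset and poll for newer rows, much like a
# Kafka consumer group committing offsets after processing.
last_offset = 0
rows = db.execute(
    "SELECT offset_id, payload FROM events "
    "WHERE offset_id > ? ORDER BY offset_id",
    (last_offset,),
).fetchall()
for offset_id, payload in rows:
    print(offset_id, payload)
    last_offset = offset_id  # "commit" the offset
```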

Learn to stop using shiny new things and love MySQL

A good portion of the startups I meet and advise want to use the newest, hottest technology to build something that’s cool, but not technologically groundbreaking. I have yet to meet a startup building a time machine, teleporter or quantum social network that would actually require some amazing new tech. They have awesome new ideas with down-to-earth technical requirements, so I kept wondering why they chose this shiny (and risky) new stuff when all they need is a good ol’ trustworthy database. I think it’s because many assume that building the latest and greatest requires the latest and greatest!

It turns out that’s only one of three bad reasons (traps) why people go for the shiny and new. Reason two is that people mistakenly assume older stuff is slow, not feature-rich, or won’t scale. “MySQL is sluggish,” they say. “Java is slow,” I’ve heard. “Python won’t scale,” they claim. None of it is true.

[Read more]
Exploring message brokers

Message brokers are not regularly covered here but are, nonetheless, important web-related technologies. Some time ago, I was asked by one of our customers to review a selection of OSS message brokers and propose a couple of good candidates. The requirements were fairly simple: behave well when there’s a large backlog of messages, be able to form a cluster, and, if a node in the cluster fails, try to protect the data but never block the publishers, even if that might imply data loss. Nothing fancy regarding queue and topic management. I decided to write my findings here before I forget…

I don’t consider myself a message broker specialist, and I spent only about a day or two on each, so I may have made some big configuration mistakes. I’ll take the blame if something is misconfigured or not used correctly.
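To make the “never block the publishers, even at the risk of losing data” requirement concrete, here is roughly how a Kafka producer expresses that trade-off. This assumes the kafka-python client and a broker at localhost:9092, neither of which the post specifies:

```python
from kafka import KafkaProducer  # pip install kafka-python

# acks=0 is fire-and-forget: the producer never waits for the broker to
# confirm the write, so publishers are never blocked, at the cost of
# possible message loss if a broker dies -- exactly the trade-off the
# customer's requirements accepted.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    acks=0,             # do not wait for any acknowledgement
    max_block_ms=1000,  # cap how long send() may block on metadata
)

producer.send("reviews", b"hello broker")
producer.flush()  # push any buffered messages out before exiting
```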

[Read more]
Stream Processors and DBMS Persistence

High-Velocity Data—AKA Fast Data or Streaming Data—seems to be all the rage these days. With the increased adoption of Big Data tools, people have recognized the value contained in this data and they are looking to get that value in real-time instead of a time-shifted batch process that can often introduce a 6-hour (or more) delay in time-to-value.

High-velocity data has all of the earmarks of a big technological wave. The technology leaders are building stream processors. Venture firms are investing money in stream processing companies. And existing tech companies are jumping on the bandwagon, associating their products with this hot trend and making them buzzword-compliant.

Some have asked whether high-velocity data will complement or replace Big Data. Big Data addresses pooled data, or data at rest. History tells us that there are different use cases and …
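To make “time-to-value” concrete, here is a toy contrast between batch and streaming computation of the same aggregate; the names and numbers are illustrative only:

```python
def batch_average(events):
    # Batch: wait for the full window of data to land, then compute once.
    # The value arrives only after the whole batch has been collected.
    return sum(events) / len(events)

def streaming_average(event_stream):
    # Streaming: update the aggregate as each event arrives, so the
    # current value is available immediately instead of hours later.
    total, count = 0.0, 0
    for value in event_stream:
        total += value
        count += 1
        yield total / count  # value-so-far, with no batch delay

# Illustrative usage with a fake event source.
events = [12.0, 7.5, 9.0]
print(batch_average(events))            # one answer, after everything
for running in streaming_average(iter(events)):
    print(running)                      # an answer after every event
```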

[Read more]