Amazon releases a new database offering every other day. It sure isn’t easy to keep up. Join 35,000 others and follow Sean Hull on twitter @hullsean. Let’s say you’re hiring a devops engineer and you want to suss out their database knowledge. Or you’re hiring a professional services firm or freelance consultant. Whatever the … Continue reading How to interview an amazon database expert →
With tons of new NoSQL database offerings every day, developers & architects have a lot of options: Cassandra, MongoDB, CouchDB, DynamoDB & Firebase, to name a few. Join 33,000 others and follow Sean Hull on twitter @hullsean. What’s more, in the data warehouse space you have Hadoop, which can churn through terabytes of data and get … Continue reading Will SQL just die already? →
A behind-the-scenes look at how Uber Engineering continues to develop our virtual onboarding funnel, which enables hundreds of thousands of driver-partners to get on the road and start earning money with Uber.
The post How Uber Engineering Massively Scaled Global Driver Onboarding appeared first on Uber Engineering Blog.
Uber Engineering explains the technical reasoning behind its switch in database technologies, from Postgres to MySQL.
The post Why Uber Engineering Switched from Postgres to MySQL appeared first on Uber Engineering Blog.
Different types of languages deal with this “value” in diverse ways. You can find a more comprehensive list of what NULL can mean on this website. What I like to think of it as is something along the lines of invalid, as if some sort of garbage is stored there. It doesn’t mean it’s empty; it just means that something is there, and it has no value to you. Both databases recommend using \N to represent NULL values where importing or exporting of data is involved. When …[Read more]
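That \N convention can be sketched with a small Python helper. By default, both PostgreSQL's COPY text format and MySQL's LOAD DATA INFILE interpret the two-character sequence \N as NULL in tab-delimited data; the function and sample rows below are illustrative, not from the original post.

```python
def to_copy_line(row):
    """Render one row as a tab-delimited line, encoding None as the
    literal two-character sequence \\N, which both PostgreSQL COPY and
    MySQL LOAD DATA INFILE read back as NULL by default."""
    return "\t".join(r"\N" if col is None else str(col) for col in row)

rows = [(1, "alice", None), (2, None, "pending")]
lines = [to_copy_line(r) for r in rows]
# lines[0] is "1<TAB>alice<TAB>\N"
```

Writing `\N` rather than an empty field is what preserves the distinction the post draws: an empty string is a value, while NULL is the absence of one.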
Leo Polovets of Susa Ventures publishes an excellent blog called Coding VC. There you can find some great posts, such as pitches by analogy, an algorithm for seed round valuations, and analyzing Product Hunt data. He recently wrote a blog post about a topic near and dear to my heart, Which Technologies do Startups […]
Data migrations always present a wide range of challenges. I recently took on a request to determine the difficulty of converting an ecommerce shop's MySQL 5.0 database to PostgreSQL 9.3. The first (presumably "easier") step was just getting the schema converted and the data imported, before tackling the more challenging work of assessing the site's full query base and rewriting the large number of custom queries that leverage MySQL-specific language elements into their PostgreSQL counterparts.
This first step contained a number of difficulties I had anticipated, but I also hit one that I definitely had not:
ERROR: value too long for type character varying(20)
The error message itself is absolutely clear, but how could this possibly be? The obvious answer--that the varchar definitions were different lengths between MySQL and PostgreSQL--was sadly quite wrong (which you …[Read more]
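One way to get ahead of errors like this is a pre-flight scan of the exported data against the target column lengths before attempting the import. A minimal illustrative sketch (the function name and data are hypothetical; note that PostgreSQL counts a varchar(n) limit in characters, not bytes):

```python
def over_length(values, limit):
    """Return the values whose character count exceeds a target
    varchar(limit) column, skipping NULLs."""
    return [v for v in values if v is not None and len(v) > limit]

# Scan a column's dump data against a varchar(20) target before COPYing:
suspects = over_length(["fits fine", "x" * 21, None], 20)
# suspects contains only the 21-character value
```

A scan like this tells you which rows would trip the error before a long-running import dies halfway through.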
In Part 1 of "SFTP virtual users with ProFTPD and Rails", I introduced ProFTPD's virtual users and presented my annotated proftpd.conf that I used to integrate virtual users with a Rails application. Here in Part 2, I'll show how we generate virtual user credentials, how we display them to the user, and the SftpUser ActiveRecord model that does the heavy lifting.
Let's start at the top with the SFTP credentials UI. Our app's main workflow actually has users doing most of their uploads through a sweet Plupload widget. So, by default, the SFTP functionality is hidden behind a simple button sitting to the right of the Plupload widget:
The user can click that button to open the SFTP UI, or the Plupload …[Read more]
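The post's actual implementation lives in a Rails SftpUser model; as a language-neutral sketch of the credential-generation step it describes, here is one hedged way to produce a per-user SFTP login (the `sftp_user_` naming scheme and password length are assumptions for illustration, not the post's):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def generate_sftp_credentials(user_id, length=12):
    """Generate a virtual-user SFTP login: a deterministic username
    derived from the app's user id, plus a random password drawn from
    a cryptographically secure source."""
    username = f"sftp_user_{user_id}"  # hypothetical naming scheme
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return username, password
```

Using a CSPRNG (here Python's `secrets`) rather than a general-purpose random generator matters because these credentials grant file-upload access.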
One of our clients, for various historical reasons, runs both MySQL and PostgreSQL to support their website. Information for user login lives in one database, but their customer activity lives in the other. The eventual plan is to consolidate these databases, but thus far, other concerns have been more pressing. So when they needed a report combining user account information and customer activity, the involvement of two separate databases became a significant complicating factor.
In similar situations in the past, using earlier versions of PostgreSQL, we've written scripts to pull data from MySQL and dump it into PostgreSQL. This works well enough, but we've updated …[Read more]
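The merge step of that script-based approach can be sketched in miniature. Assuming account rows have already been fetched from one database and activity rows from the other (the field names here are hypothetical), the report join reduces to:

```python
def join_users_activity(users, activity):
    """Merge user rows (from the login database) with activity rows
    (from the customer database) on the shared user id, producing one
    combined record per activity row."""
    by_id = {u["id"]: u for u in users}
    return [
        {**by_id[a["user_id"]], **a}
        for a in activity
        if a["user_id"] in by_id
    ]

users = [{"id": 1, "email": "a@example.com"}]
activity = [{"user_id": 1, "orders": 3}]
report = join_users_activity(users, activity)
```

This in-memory join is exactly the work a foreign data wrapper can push into the database itself, which is why newer PostgreSQL releases make the scripted approach less necessary.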
Disk I/O is frequently the performance bottleneck with relational databases. With AWS recently releasing 4,000 PIOPS EBS volumes, I wanted to do some benchmarking with pgbench and PostgreSQL 9.2. Prior to this release, the maximum available I/O capacity was 2,000 IOPS per volume. EBS I/O is read and written in 16 KB chunks, with performance limited by both the I/O capacity of the EBS volumes and the network bandwidth between an EC2 instance and the EBS network. My goal isn't to provide a PostgreSQL tuning guide, an EC2 tuning guide, or a database deathmatch complete with graphs; I'll just be displaying what kind of performance is available out of the box without substantive tuning. In other words, this is an exploratory benchmark, not a comparative benchmark. I would have liked to compare the performance of 4,000 PIOPS EBS volumes with 2,000 PIOPS EBS volumes, but I ran out of time, so that will have to wait for a following …[Read more]
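Those two figures bound the achievable throughput: provisioned IOPS times the per-I/O chunk size gives a back-of-the-envelope ceiling, before the EC2-to-EBS network bandwidth is even considered. A small worked example (using the 16 KB I/O size quoted above):

```python
def ebs_max_throughput_mib(piops, io_size_kib=16):
    """Theoretical maximum EBS throughput in MiB/s: provisioned IOPS
    multiplied by the per-I/O size, converted from KiB to MiB."""
    return piops * io_size_kib / 1024

ebs_max_throughput_mib(4000)  # 4,000 PIOPS volume -> 62.5 MiB/s
ebs_max_throughput_mib(2000)  # 2,000 PIOPS volume -> 31.25 MiB/s
```

In practice the instance's network link to the EBS service can cap throughput below this figure, which is part of why the benchmark is exploratory rather than a statement of guaranteed performance.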