Join us for a live webinar and download a new whitepaper where we discuss how to realize
new value from data collected during web session
management.
Session management has long been a key component of any web
infrastructure – enhancing the user browsing experience through
improved reliability, reduced latency and tighter security.
Increasingly, organizations are looking to unlock more value from
session management to further improve user loyalty (i.e. making
the web service more “sticky”) and to improve monetization of web
services. There are two distinct developments that offer
the promise of unlocking more value from session data:
1. Provide highly personalized browsing
experiences by …
The MySQL Cluster data nodes create a log file called ndb_<NodeID>_out.log, which can become quite big over time. Unlike the cluster logs created by the management nodes, it has no rotation built in, so you have to revert to the basics: copy the file away and truncate the original.
For example, if you want to ‘rotate’ the log file of the data node with NodeID 3:
shell> mv ndb_3_out.log.1.gz ndb_3_out.log.2.gz
shell> cp ndb_3_out.log ndb_3_out.log.1
shell> cat /dev/null > ndb_3_out.log
shell> gzip ndb_3_out.log.1
It’s not elegant, and you might lose some entries, but it will help you keep disk usage minimal. If you don’t need the log at all, the third command (cat /dev/null > ndb_3_out.log) alone would do the trick.
You can use logrotate’s copytruncate to …[Read more]
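The excerpt is cut off here, but as a rough sketch of what a logrotate copytruncate setup could look like (the log path, node ID and rotation schedule below are assumptions for illustration, not taken from the original post):

shell> cat > /etc/logrotate.d/ndb_3_out <<'EOF'
# rotate the data node log in place; copytruncate avoids having to
# restart or signal the ndbd process (a few entries may still be lost)
/var/lib/mysql-cluster/ndb_3_out.log {
    weekly
    rotate 4
    compress
    missingok
    copytruncate
}
EOF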
The MySQL Cluster data node log files can become very big. The best solution is to fix the underlying problem, but if you know what you are doing, you can work around it and filter out the annoying log entries.
An example of ‘annoying’ entries is when you run MySQL Cluster on virtual machines (not good!) and the disks and OS can’t keep up any more; here are a few lines from ndb_X_out.log:
2011-04-03 10:52:31 [ndbd] WARNING -- Ndb kernel thread 0 is stuck in: Scanning Timers elapsed=100
2011-04-03 10:52:31 [ndbd] INFO -- timerHandlingLab now: 1301820751642 sent: 1301820751395 diff: 247
2011-04-03 10:52:31 [ndbd] INFO -- Watchdog: User time: 296 System time: 536
2011-04-03 10:52:31 [ndbd] INFO -- Watchdog: User time: 296 System time: 536
2011-04-03 10:52:31 [ndbd] WARNING -- Watchdog: …[Read more]
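The post is truncated before the author's actual workaround is shown. Purely as an illustration of the idea (the pattern list and file names are invented for this sketch), the noise could be stripped after the fact with something like:

shell> grep -v -E 'Watchdog|timerHandlingLab|stuck in: Scanning Timers' ndb_3_out.log > ndb_3_out.filtered.log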
Unlike most other MySQL storage engines, Ndb does not perform all
of its work in the MySQLD process. The Ndb table handler maps
Storage Engine Api calls onto NdbApi calls, which eventually result in
communication with data nodes. In terms of layers, we have SQL
-> Handler Api -> NdbApi -> Communication. At each of
these layer boundaries, the mapping from operations at the
upper layer to operations at the lower layer is non-trivial,
depending on runtime state, statistics, optimisations and so on.
The MySQL status variables can be used to understand the
behaviour of the MySQL Server in terms of user commands
processed, and also how these map to some of the Storage Engine
Handler Api calls.
Status variables tracking user commands start with …
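The excerpt stops here, so the specific variables the author goes on to discuss are not shown. As a quick illustration of the kind of counters being described, the per-statement and per-handler counters can be inspected with SHOW GLOBAL STATUS:

mysql> SHOW GLOBAL STATUS LIKE 'Com_select';
mysql> SHOW GLOBAL STATUS LIKE 'Handler_read%';

The Com_% counters track user statements executed by the server, while the Handler_% counters track the calls the server makes through the storage engine handler interface.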
Most people looking at a diagram showing the Cluster architecture
soon want to know if the system can scale online. Api nodes such
as MySQLD processes can be added online, and the storage capacity
of existing data nodes can be increased online, but it was not
always possible to add new data nodes to the cluster without an
initial system restart requiring a backup and restore.
An online add node and data repartitioning feature was finally
implemented in MySQL Cluster 7.0. It's not clear how often users
actually do scale their Clusters online, but it certainly is a
cool thing to be able to do.
There are two parts to the feature:
- Online add an empty data node to an existing cluster
- Online rebalance existing data across the existing and new data nodes
Adding an empty data node to a cluster sounds trivial, but is
actually fairly complex given the cluster's …
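The excerpt stops here. For context, a rough sketch of the documented workflow in MySQL Cluster 7.0 and later is shown below; node IDs, host layout and the table name are placeholders, and the exact steps should be checked against the manual for your version:

# 1. Add [ndbd] sections for the new nodes to config.ini, then perform a
#    rolling restart of the management server(s) and existing data nodes.
# 2. Start the new data nodes on their hosts:
shell> ndbd --initial
# 3. Group the new nodes into a node group (the IDs are examples):
shell> ndb_mgm -e "CREATE NODEGROUP 5,6"
# 4. Repartition each NDB table across the enlarged cluster and reclaim space:
mysql> ALTER ONLINE TABLE mydb.mytable REORGANIZE PARTITION;
mysql> OPTIMIZE TABLE mydb.mytable;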
MySQL Cluster distributes rows amongst the data nodes in a
cluster, and also provides data replication. How does this work?
What are the trade-offs?
Table fragments
Tables are 'horizontally fragmented' into table fragments, each
containing a disjoint subset of the rows of the table. The union
of rows in all table fragments is the set of rows in the table.
Rows are always identified by their primary key. Tables with no
primary key are given a hidden primary key by MySQLD.
By default, one table fragment is created for each data node in
the cluster at the time the table is created.
Node groups and Fragment replicas
The data nodes in a cluster are logically divided into Node
groups. The size of each Node group is controlled by the
NoOfReplicas parameter. All data nodes in a Node group store the
same data. In other words, where the NoOfReplicas parameter is
two or greater, each …
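The post is cut off here. As a small illustration of the node group concept (host names below are invented for this sketch), a config.ini with NoOfReplicas=2 and four data nodes yields two node groups of two nodes each:

[ndbd default]
# each fragment is stored on two data nodes, so node groups have two members
NoOfReplicas=2

[ndb_mgmd]
HostName=mgm_host

[ndbd]
HostName=data_host_1
[ndbd]
HostName=data_host_2
[ndbd]
HostName=data_host_3
[ndbd]
HostName=data_host_4

With default node IDs, data_host_1 and data_host_2 form node group 0 and data_host_3 and data_host_4 form node group 1; each group holds both replicas of its share of the table fragments.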
On September 8, 2010 Oracle announced the availability of Oracle Solaris Cluster 3.3. Oracle Solaris Cluster 3.3, built on the solid foundation of Oracle Solaris, offers the most extensive Oracle enterprise High Availability and Disaster Recovery solutions for the largest portfolio of mission-critical applications. Integrated and thoroughly tested with Oracle's Sun servers, storage, connectivity solutions and Solaris 10 features, Oracle Solaris Cluster is now qualified with Solaris Trusted Extensions, supports InfiniBand for general networking or storage usage, and can be deployed with Oracle Unified Storage in Campus Cluster configurations. It extends its application support to new Oracle applications such as Oracle Business Intelligence, PeopleSoft, TimesTen, and MySQL Cluster. The single, integrated HA and DR solution enables multi-tier deployments in virtualized environments. In this release, Oracle Solaris Containers clusters support even more configurations …[Read more]
When MySQL AB bought Sun Microsystems in 2008 (or did Sun buy
MySQL?), most of the MySQL team merged with the existing Database
Technology Group (DBTG) within Sun. The DBTG had been busy
working on JavaDB, Postgres and other DB-related projects as well
as 'High Availability DB' (HADB), which was Sun's name for the
database formerly known as Clustra.
Clustra originated as a university research project which spun
out into a startup company and was then acquired by Sun around
the dot-com era. A number of technical papers describing
aspects of Clustra's design and history can be found online. Clustra is in many ways similar to Ndb
Cluster, and not just in their shared Scandinavian roots: both are
shared-nothing parallel databases originally aimed at the
Telecoms market, supporting high availability and horizontal
scalability. Clustra has an impressive feature set and …