Posted by & filed under HowTo.

At MongoDB World last month, MongoDB founder and CTO Eliot Horowitz announced support for pluggable storage engines, scheduled for the 2.8 release. This is exciting stuff: it means MongoDB users will be able to choose a storage engine that best suits their workload, and because the API is planned to support all MongoDB features, they won't have to give up any of the functionality they enjoy today. Not only that, but nodes in the same replica set will be able to use different storage engines, enabling all sorts of interesting configurations for varying needs.

The great thing about MongoDB being fully open source is that we don't have to wait until 2.8 is actually released to play around with these very experimental features. The entirety of the MongoDB source code can be cloned from GitHub and compiled to include any experimental features currently being worked on.

In the example below I'll show you how to build MongoDB with the RocksDB example storage engine presented at MongoDB World.

Starting from a freshly installed CentOS 6.5 cloud instance, we'll grab the basic dependencies:

yum groupinstall 'Development Tools'; yum install git glibc-devel scons

Next we'll get the MongoDB source code from GitHub:

git clone https://github.com/mongodb/mongo.git

Now all that’s left is to compile the source with RocksDB support enabled:

scons --rocksdb=ROCKSDB mongo mongod

Or speed it up by using the -j option to specify the number of parallel jobs. If you plan to dedicate the system you're compiling on for the duration of the build, a good starting point is the number of cores in your machine plus one. Mine looked like:

scons -j 17 --rocksdb=ROCKSDB mongo mongod

It's worth noting that pluggable storage engine support and the RocksDB engine are completely experimental at this point, so there's a good chance you'll encounter errors and be unable to compile from master; that's to be expected at this stage. If you'd like to keep an eye on how things are progressing, the MongoDB dev mailing list is a good place to start.

Once the compile has finished you'll want to start up a mongod process using the new --storageEngine parameter:

./mongod --storageEngine rocksExperiment

And finally you can test everything by connecting and inserting a simple document, then using db.stats(). You should see RocksDB statistics piped back to you if everything has gone as planned.
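For example, after connecting with ./mongo, a quick smoke test might look like this (a minimal sketch; the collection name is arbitrary, and the exact statistics fields the RocksDB engine reports may vary as the code evolves):

use test
db.rockstest.insert({ engine: "rocksdb", works: true })   // write a simple document
db.rockstest.findOne()                                     // read it back
db.stats()                                                 // should include RocksDB storage statistics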

As you can see it’s fairly simple to get up and running with experimental features enabled. I’m very excited to see the pluggable storage engine code progress and see more new engines announced as we get closer to the 2.8 release.

Posted by & filed under HowTo.

MongoDB Inc. has introduced lots of great new enterprise features with release 2.6 of MongoDB; however, one thing still absent is a desktop application to manage your database. Introducing Robomongo, the cross-platform, open source MongoDB management tool. With the following instructions you'll see how easy it is to integrate Robomongo with your ObjectRocket MongoDB instance.

Let’s get started! First we’re going to need to note down some details from the ObjectRocket control panel:

  • Database connect string (note that the port is different for SSL vs non-SSL connections)
  • Database username & password

Download and install Robomongo for your OS of choice (at the time of writing the most current version is 0.8.4, which is the release I’m basing these instructions on).

Now open Robomongo. Initially you'll be greeted with the MongoDB Connections box; click the Create link in the top left of the screen.

MongoDB Connections screen

 

After clicking the Create link above, you'll see the following Connection Settings screen. I've named my instance ObjectRocket but you may want to use more specific naming if you have several databases.

 

In the Address field, enter the database connect string you noted down earlier. Remember that if you intend to connect via SSL, the target port will be different. Usually this is your <plain text port> + 10000, so for my example the plain text port is 23042 and the SSL port is 33042.

 

Connection Settings

 

Now select the authentication tab and add the user credentials you noted down earlier.

Authentication Tab

If you prefer to use SSL, select the SSL tab at the top and tick Use SSL Protocol. ObjectRocket doesn’t currently support SSL Certificates so disregard that box.

SSL Settings

Now press Test to confirm the settings are correct. If everything works you should see a Diagnostic message box similar to below.

 

Diagnostic Message

Press Save to store your connection. Congratulations, you have successfully connected a great desktop MongoDB management application to your ObjectRocket instance!

But what if you’re using strict ACLs and you work from several locations or your home broadband does not have a static IP? You will have to keep adding your local (changing) public IP address to your instance ACLs in the ObjectRocket control panel before you can work with Robomongo.

Another method is to configure Robomongo to connect to your instance via a (Linux) server with a static IP (for example: one of your application servers, or a cloud server created to act as a proxy) using an SSH tunnel. The following instructions will guide you through the process.

 

First create yourself a user on a Linux server that has a static public IP. If this is not a server that is already allowed access via your ACL rule set, then remember to add this server’s IP address to your instance ACLs.

 

Generate an SSH public/private key pair and install the public part on the Linux server that will be our proxy host; an excellent article on how to configure SSH keys can be found here.
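If you need a quick refresher, the key setup from your workstation looks something like the following (a sketch only; the key filename, user, and proxy hostname are placeholders):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/or_proxy_key
ssh-copy-id -i ~/.ssh/or_proxy_key.pub tunneluser@proxy.example.com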

 

Now configure Robomongo to use our SSH proxy host and key.

SSH Connection Settings

Test your connection again; if the test completes without error, press Save to store your connection settings. You have successfully configured Robomongo to access your ObjectRocket instance via a proxy host over SSH.
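As an aside, if your Robomongo build doesn't expose SSH settings, you can get the same result by opening the tunnel yourself and pointing Robomongo at localhost. A rough sketch, reusing the example plain text port from above and placeholder host names:

ssh -f -N -L 23042:<database connect string host>:23042 tunneluser@proxy.example.com

With the tunnel up, set the Address field in Robomongo to localhost and port 23042.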

 

 

 

Posted by & filed under HowTo.

JSONStudio and ObjectRocket, A match made in Java.

 

If you have ever worked with MySQL then you have probably used tools like phpMyAdmin or MySQL Workbench to interface with the database and run ad-hoc queries or generate reports. These tools have been around for a long time and have matured to become valuable tools for day-to-day interaction with MySQL. If you have ever searched for similar products for MongoDB then you should definitely take a look at JSONStudio by jSonar Inc. It is a web-based front end to interact with any MongoDB implementation and offers features like query generation, reporting and even data visualization. JSONStudio is not just one tool but actually a suite of many different tools under one unified dashboard, and I must say its list of features is impressive. The best part about this suite of tools is that it interfaces seamlessly with any ObjectRocket MongoDB instance.

 

To get started, head on over to http://jsonstudio.com/evaluate/ and download the free evaluation copy of the tool. I installed the version for Mac OS X, but if you are on Linux or Windows those packages are listed as well. The installation guide can be found by hovering over Resources in the navigation bar and selecting Guide. This will take you to the documentation for the current version of the software.

 

Once you have the software installed and the web service up and running you should get to a screen that looks something like this:

 

JSONStudio connection page

All the details you need to hook this up to an ObjectRocket instance can be found in the ObjectRocket Control Panel. First log in to https://app.objectrocket.com with your ObjectRocket username and password. Once authenticated you should see a list of instances like so:

 

OR-Demo-Instances

I am going to be connecting to my JSONStudio instance and looking specifically at my JSONTest database.  To get those connection details I first will click on my JSONStudio instance and then select the JSONTest database in the Databases section of my Instance Details page:
JSONStudio-Instance-Details

I then will need the SSL Connect String and a username from the Users section:

JSONStudio-Database-Details

With the connection details in hand we can now connect JSONStudio to the instance.  Fill in the relevant information into the login page like so:

JSONStudio-Connection-Details
Since I am connecting over SSL I need to check the use SSL box as this passes the correct flag to the driver under the hood to make a secure connection.  I also chose the Secondary Preferred option so that my search queries will favor secondaries instead of the primary.  This can help with performance if the primary is under a heavy write load, but be aware, as mentioned in the MongoDB documentation, reading from secondaries can return stale data in certain circumstances.  Another thing to note is I selected to save the information I just entered such that I can quickly connect back another time.  When you save the datasource it does not save the password, so you will have to type that every time.
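For reference, the same choices expressed as a MongoDB connection string would look roughly like the following (a sketch only; the host, port, and credentials are placeholders, and JSONStudio assembles the real string for you under the hood):

mongodb://jsonuser:secret@<ssl connect string host>:<ssl port>/JSONTest?ssl=true&readPreference=secondaryPreferred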

 

Once you hit Login you should see a screen very similar to this:

 

JSONStudio-Dashboard-View

That should get you started with using JSONStudio by jSonar Inc. with your ObjectRocket MongoDB Instance.  If you run into any issues connecting to your instance please email support@objectrocket.com and we will be more than happy to help you get connected.  Happy querying!

Posted by & filed under ObjectRocket Features.

 

For a number of months ObjectRocket has had a handful of customers helping our team develop integration with New Relic. Offering a suite of software analytics products, New Relic helps their customers gain actionable, real-time business insights from the billions of metrics their software is producing, including user click streams, mobile activity, end user experiences and transactions.

Today, we’re excited to announce the availability of ObjectRocket’s MongoDB plugin on the New Relic Platform, giving New Relic users increased visibility into their metrics from the ObjectRocket MongoDB service.

 


This is the first in a suite of integrations and tools that help customers peer deeper into the ObjectRocket platform. The plugin requires zero installation: all you need to do is drop your New Relic account key into the ObjectRocket UI, and data will automatically start flowing into New Relic. The plugin is account-wide, so each of your instances will start sending data once your account key is set.

So what data does the plugin expose? Well, here is a list:

Installing the plugin

Installation is very simple.

  1. Get your New Relic account key here.
  2. Drop it into ObjectRocket here. (Be patient, it could take a few minutes.)
  3. Click on the tab in New Relic named ‘ObjRocket’.

Of course you will need accounts for both New Relic and ObjectRocket to make this all happen. Happy graphing, you data nerds, you!

Do you have a metric or class of metrics you want exposed? Hit us up, we would love to hear from you.

Posted by & filed under Features.

We have a new look and feel for our Control Panel.

A number of weeks ago, based on customer feedback and our own wishlist, we decided to rewrite the user interface for our web control panel from the ground up. We wanted to ensure we gave our customers a clean and simple control interface for the ObjectRocket service.


The goal of this project was to simply convert our existing UI over to the new UI. However, there were a couple of items we couldn't resist fixing. One of them was how we represent space usage. MongoDB has a multi-part storage design, and we wanted to more accurately represent how an instance's storage usage is broken down.

The core of these changes is tied to an internal Rackspace project that enables small teams and projects to quickly and easily incorporate the mycloud.rackspace.com experience, and iterate quickly. We have been working internally with this very talented team to be the first Rackspace company to use this new UI framework. We couldn’t be more excited, and look forward to helping to push the project forward.

Some highlights of the new Control Panel are:

  • Consistent flow: Pages are organized in a logical drill down manner, and consistently implemented.
  • Space usage indicator: Graphical space usage breakdown across a cluster.
  • Cluster balancer indicator: Graphical shard balance indicator.
  • Dashboard location: Dashboards are now accessed from the instances menu and renamed to ‘Statistics’.
  • Flyouts with help on many pages.

We will be rolling out our new interface this week across the board. We hope you enjoy the new user interface; please don't be shy about giving us feedback. If you aren't already running your MongoDB database on ObjectRocket, sign up to check it out.

Posted by & filed under Uncategorized.

We are excited to announce Automated Online Compaction on the ObjectRocket platform for MongoDB.

Automated Online Compaction allows MongoDB instances to be compacted online and in the background on the ObjectRocket platform. The application will only experience a replica set election in order to start using the newly compacted slave. Without this feature, applications experience extended downtime when a collection is compacted or when a database is repaired.

Compactions can be scheduled, and windows defined for when the final stepDown() takes place. Users can turn on the feature and not have to worry about MongoDB fragmenting over time. The instance is kept in a nice tidy form and it’s all automated.

All databases fragment over time, some worse than others depending on the underlying design. In a generic sense, fragmentation occurs when deletes create spaces that new or updated data can't reuse. MongoDB fragments just like most popular databases. Even when using Powerof2Sizes we found that we spent a large amount of our DBA time working to keep customers' databases compact. We felt that if we charge based on disk space footprint, it's only right to help customers keep that footprint tidy. But the stock commands didn't work for us because they require service interruptions. Anything we built had to be automated, online, work in parallel at scale, and present a minimal impact to customers.

To this end, we have been working over the last few months to build this feature, and had to release a couple other components in order to make this possible. First, we built a component that allows the user to specify a window when they would like a stepDown() to be performed. Then we needed to build out a complete state machine for MongoDB replica sets. In order for this feature to work properly, our code needed to understand the state of all replica sets in a cluster at any given point in time, understand failures, and understand how to recover. We also needed a scheduler component to allow the scheduled stepDown and compactions. We needed to ensure we took into account backups being run, balancer activity, and overall availability impact to the cluster.

With that work done, we then could build a component that performed complete compactions of a replica set in the background, and almost 100% transparent to the user and calling application. The new feature is called Automated Online Compaction for MongoDB.

Here is how it works:

  1. User requests a compaction manually or they have set a compaction schedule.
  2. Per shard, a SECONDARY is selected for compaction, and compaction starts.
  3. Repeat for all remaining SECONDARY replicas.
  4. Wait until all shards are done.
  5. Wait for the stepDown window; an election takes place and the PRIMARY becomes a SECONDARY. The new PRIMARY is already compact at this point.
  6. Finish up by compacting the previous PRIMARY.

It should be reiterated that while the compaction is done in the background on a SECONDARY, an election must take place in order to rotate it to PRIMARY. Users must be aware of this fact; designing for elections is best practice anyway. Greg has some good thoughts about designing for elections in client code.

Under the covers, here is what we are doing:

We track the stepDown window (if defined) for the instance.

We keep track of each replica set member and its state, updating the metadata so we know the state of every slave. A member's state is "syncing", and once compaction completes it becomes "compressed".

Once all the SECONDARY slaves are {state: "compressed"} we wait for the scheduled stepDown window.

And lastly, we compact the remaining SECONDARY (the previous PRIMARY).
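As a rough illustration only, here is what the equivalent manual steps look like with stock mongo shell commands (this is a sketch, not our actual automation, and the collection name is hypothetical):

// on a SECONDARY that has been taken out of rotation, compact a collection
db.runCommand({ compact: "orders" })
// once every SECONDARY is compacted, rotate the PRIMARY during the stepDown window
rs.stepDown(60)
// the old PRIMARY, now a SECONDARY, is compacted last
db.runCommand({ compact: "orders" })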

In order to get started with Automated Online Compaction, navigate to your instances view and select the compaction button, then schedule a stepdown window on the settings page. You can optionally choose to run the compaction weekly as well. Additional information can be found in the documentation here and here as well.

Posted by & filed under Data Centers.

You asked and we listened! It’s been a few weeks since we rolled out the SYD data center and we are happy to say all is well down under!

Rackspace launched the Sydney data center in 2013, and we’re happy to share that ObjectRocket service is now available! The new data center is named “SYD” and has regional pricing just like HKG.

To provision an instance in the Sydney data center simply choose the 'SYD' option at creation time. Instances are built just like any other ObjectRocket instance with high performance, scalability, availability and fantastic support at its core.

Need help? Have questions? Tweet us at @ObjectRocket or email us at support@objectrocket.com.

Posted by & filed under ObjectRocket Features.

Managing your Access Control Lists just got a lot easier.

One way we embrace a secure-by-default approach at ObjectRocket is requiring network Access Control List (ACL) entries for every instance. While ObjectRocket ACLs can be managed via both our web UI and API, customers with large and dynamic application environments have asked for an easier way to deal with ACLs.

Today we are announcing a new feature: ACLSync.

ACLSync is an automated solution for synchronizing your environments’ IP addresses with your ObjectRocket ACLs. ACLSync adds and deletes ACLs on the fly as your environment changes, saving you the trouble of manually managing ObjectRocket network access.

ACLSync is available today for the AWS EC2 platform, with support for other Cloud Service Providers coming soon.

Getting Started with ACLSync

To enable ACLSync for your EC2 environment, simply navigate to your account's External Integration settings page. In the ACLSync AWS section, select the AWS region you wish to sync with, enter a valid AWS Access Key ID and Secret Access Key (we recommend creating a read-only key pair through IAM for this purpose), and click the button labeled 'Set AWS Access Key'.

Your new ACLs should appear for all instances in your account within ten minutes, and will synchronize about every five minutes. ACLs added by ACLSync will automatically appear in the ACL tab of your instance details page. Each new ACL created by ACLSync will be prefixed with 'aws-'. ACLSync will keep things in sync as your AWS environment changes over time.

If you have questions, comments, or concerns please contact support.

Posted by & filed under ObjectRocket Features.

Scaling on ObjectRocket gets easier now that MongoDB shard key creation can be automated with AutoKey.

Part of our philosophy at ObjectRocket is to ensure customers have a seamless and fantastic MongoDB experience. Customers focus on the application; we take care of the database. The new AutoKey feature furthers our goal of a massively automated database as a service offering.

Since our inception, ObjectRocket has had Rocketscale – an automated process that adds shards to customer instances as they grow. When a customer starts to run out of space, Rocketscale adds a shard, and the balancer starts moving chunks to the new shard. Business continues, performance stays fast, all is good. That is wonderful; however, each collection still needs to have a shard key defined. A shard key is the key by which data is split among the shards participating in a MongoDB cluster. Defining a shard key requires an understanding of the application access patterns. Sometimes it makes sense to really engineer a great key, but other times a simple and generic key can be used. Not every collection is going to be huge, used frequently, or have specific access patterns requiring a highly engineered key.

With the release of MongoDB 2.4, hashed shard keys became available. While hashed shard keys are not perfect for every scenario, they are a great general-purpose choice for a large set of use cases.

Enter AutoKey

AutoKey automates the process of adding a hashed index and shard key on collections in an ObjectRocket instance. Once you turn on AutoKey, shard keys will automatically be defined where they don't already exist (for collections > 256MB). The AutoKey daemon fires up periodically and checks for keys to create. Once it finds collections with missing keys, it creates the key and the indexes, and notifies the customer through our notification interface. AutoKey operates on an entire DB, so the user can manually define some shard keys and let AutoKey pick up the rest.

The index AutoKey creates and the shard key it defines both use a simple hashed definition.
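A minimal mongo shell sketch of what that amounts to, assuming the hashed key is built on the _id field (the field choice, database, and collection names here are assumptions for illustration; the actual definitions AutoKey uses may differ):

// hashed index on _id (assumed field)
db.mycollection.ensureIndex({ _id: "hashed" })
// shard the collection on the same hashed key
sh.shardCollection("mydb.mycollection", { _id: "hashed" })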

Getting started with AutoKey

Using AutoKey is easy. Just navigate to the instances view, select an instance, and click on the settings panel. Click on AutoKey to toggle it on or off. It's that simple. If you would like to specifically define some shard keys in various collections and leave others to AutoKey, that's fine too. Simply define your shard keys as you normally would and AutoKey skips them when defining new keys.

There is further reading in our docs, and as always, if you have any questions or concerns simply hit support for help.