Filed under HowTo.

By: Hart Hoover & Ryan Walker

Recently, the Rackspace DevOps Automation team announced a service that sends alerts from New Relic to Rackspace support. These alerts will generate tickets for our DevOps Engineers to respond to, so our customers can sleep soundly when alerts are generated at 3 a.m. When combined with other data points collected about our customers’ environments, our Engineers will identify where issues lie and then execute the proper course of action.

While designing the infrastructure for this service, we encountered a common but interesting problem: we needed to limit access to Rackspace internal systems for security while still maintaining a public endpoint that New Relic could talk to. Our solution was to design a service with public API endpoints and private workers completely segregated from each other. The public API endpoints receive alerts from New Relic and pass them to an ObjectRocket Redis instance acting as a queue. Worker services run internally behind a RackConnect firewall, pull messages from the queue, and create alerts.

This partitions the environments very well, but did create a problem for us with regards to log aggregation. We run an ElasticSearch/Kibana stack inside our private environment. Behind the firewall, we use fluentd to push logs directly to ElasticSearch. Outside the firewall, the EK stack can’t be reached. To solve this, we started using fluentd to push logs from our public API services to an ObjectRocket MongoDB instance. Internally, we use fluentd again to pull the logs from ObjectRocket into ElasticSearch. This gives us a single destination for all of our environment’s activities.
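As a sketch of this producer/consumer pattern, here is the flow in a few lines of Python. The function names and the in-process deque standing in for the ObjectRocket Redis list are ours, purely for illustration; the real service would use LPUSH/BRPOP against Redis:

```python
import json
from collections import deque

# Stand-in for the ObjectRocket Redis list used as a queue.
queue = deque()

def enqueue_alert(alert):
    """Public API side: serialize the New Relic alert and push it."""
    queue.append(json.dumps(alert))  # real code: redis.lpush("alerts", ...)

def dequeue_alert():
    """Private worker side: pull the next alert, if any."""
    if not queue:
        return None  # real code: redis.brpop("alerts") would block instead
    return json.loads(queue.popleft())

enqueue_alert({"app": "checkout", "severity": "critical"})
alert = dequeue_alert()
print(alert["severity"])  # -> critical
```

The point of the design is that the two sides never talk to each other directly; only the queue is shared.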

What is Fluentd?

Fluentd is an open source data collector that tries to structure data as JSON as much as possible. This means that you don’t have to write and maintain a bunch of scripts to get logging data in a similar format. It’s all JSON.
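As a toy illustration of that idea (the regex and field names below are ours, not fluentd's), turning a raw log line into a structured JSON record looks like:

```python
import json
import re

# Toy version of what a fluentd parser does: turn a raw log line
# into a JSON record with named fields.
line = '127.0.0.1 GET /health 200'
m = re.match(r'(?P<host>\S+) (?P<method>\S+) (?P<path>\S+) (?P<code>\d+)', line)
record = json.dumps(m.groupdict())
print(record)  # a JSON object with host, method, path, and code fields
```

Once every source emits records in this shape, downstream destinations can consume them uniformly.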

The power of fluentd is in its support for multiple sources and destinations. For example, you can collect data from a Twitter stream and have it notify you in IRC. There are tons of community plugins available.

Using Fluentd with Docker

Using the MongoDB fluentd plugin, one can easily push logs into ObjectRocket. First, sources must be defined. Since all of our services are using Docker, we have to get our container logs into fluentd. Jason Wilder has a great post that complements this one on how to accomplish log aggregation with docker-gen and fluentd. Once the fluentd container is running (and docker-gen has generated the fluentd configuration), you should have a section like this for each running container:

<source>
  type tail
  format json
  time_key time
  path /var/lib/docker/containers/c835298de6dde500c78a2444036101bf368908b428ae099ede17cf4855247898/c835298de6dde500c78a2444036101bf368908b428ae099ede17cf4855247898-json.log
  pos_file /var/lib/docker/containers/c835298de6dde500c78a2444036101bf368908b428ae099ede17cf4855247898/c835298de6dde500c78a2444036101bf368908b428ae099ede17cf4855247898-json.log.pos
  tag docker.container.c835298de6dd
  rotate_wait 5
</source>

This tails the container log, and keeps track of where it is in the log with a position file. It is important to note that the tag present in this configuration section is a fluentd tag, used to tell fluentd what to do with the data it aggregates.

Using Fluentd with MongoDB

On the public side, we tell fluentd what to do with data with a “match”. In this case, replace the variables with actual information from your ObjectRocket database in the same configuration file:

<match docker.**>
  type mongo
  database $DBNAME
  collection prod
  host $HOSTNAME
  port $PORT
  capped_size 100m
  password $MONGOPASS
  include_tag_key true
</match>

The setting include_tag_key tells fluentd to include the tag in the record for the log in MongoDB. This way we know exactly which log entry belongs to which container. Fluentd will start populating MongoDB with data, which we can then pull down on the private side of our application.
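For illustration, a record stored in MongoDB with the tag included might look roughly like this (the field names other than the tag are hypothetical; the exact shape depends on your log format):

```python
# Illustrative shape of a log record as stored in MongoDB when
# include_tag_key is enabled -- the "tag" field identifies the container.
record = {
    "tag": "docker.container.c835298de6dd",       # added by include_tag_key
    "time": "2014-11-12T03:00:00Z",
    "log": '{"status": 200, "path": "/alerts"}',  # the container's log line
}

# Picking out one container's entries from a batch of records:
records = [record]
mine = [r for r in records if r["tag"].endswith("c835298de6dd")]
print(len(mine))  # -> 1
```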

On the private side, we still use the fluentd MongoDB plugin, but this time set it as a source:

<source>
  type mongo_tail
  database $DBNAME
  collection prod
  host $HOSTNAME
  port $PORT
  password $MONGOPASS
  time_key time
  wait_time 5
  tag prod
  id_store_file /app/prod_last_id
</source>

Then, we provide a “match” for our logs to push them into ElasticSearch:

<match **>
  type forest
  subtype elasticsearch
  <template>
    port 9200
    index_name fluentd
    logstash_format true
    buffer_type memory
    type_name ${tag}
    flush_interval 3
    retry_limit 17
    retry_wait 1.0
    num_threads 1
  </template>
</match>

We’re also using the forest fluentd plugin which simplifies our tagging configuration across multiple environments.

Fluentd is a great way to aggregate your Docker logs across multiple hosts and push them to a MongoDB database. In our case, ObjectRocket is a way station between our public and private environments for log aggregation. Other use cases could include real-time analytics on the data you’re collecting. The best part for our team is that we don’t have to manage MongoDB, thanks to ObjectRocket’s reliability and knowledge.

About the Authors

Hart Hoover started his career at Rackspace in 2007 as a Linux Systems Administrator, providing technical support for managed dedicated server environments. He moved to the cloud in 2009 to help design and implement the Managed Cloud Servers support model, leading Rackspace to be the #1 Managed Cloud company. Hart then created and delivered cloud application and architecture training for all of Rackspace Sales. He now serves customers as an Engineer with the DevOps Automation team at Rackspace while leading San Antonio DevOps, a local meetup group. You can follow him on twitter at @hhoover.

Ryan Walker has been a Racker since 2009 and has worn many hats along the way. Starting as a Linux Systems Administrator, Ryan has since helped design and implement products such as Rackspace Managed Cloud Servers and Rackspace Deployments. Currently, he works as an Engineer with the DevOps Automation team at Rackspace providing solutions for customers and Rackers with a focus on the bleeding edge. You can follow him on twitter at @theryanwalker.

Filed under Company, Redis.

Over the past couple of months, a number of Rackspace customers have asked us when they will be able to connect to their ObjectRocket for Redis instances over ServiceNet, and we are excited to launch this feature today in our Virginia (IAD), Dallas (DFW), Chicago (ORD) and London (LON) regions.


What is ServiceNet?

ServiceNet is a private, internal (although still multi-tenant) Rackspace network that supports Rackspace-provided private RFC 1918 IPv4 addresses. ServiceNet IP addresses are not accessible from the public internet and are local to each data center. For example, a customer can configure their Rackspace account resources, such as Cloud Servers and ObjectRocket for Redis, to utilize a ServiceNet IP address instead of a Public IP address. Any traffic between these cloud resources on the Rackspace network does not incur bandwidth charges.


What Does This Mean?

Firstly, ServiceNet access is an important milestone for Rackspace customers because Redis is designed to be accessed by trusted clients inside trusted environments. This means it is usually not a good idea to expose a Redis instance directly to the internet or, in general, to an environment where untrusted clients can directly access the Redis TCP port or UNIX socket. (Note: ObjectRocket for Redis is also available over a Public IP and, following best practices, access is mediated by a layer implementing ACLs, validating user input, and deciding what operations to perform against the Redis instance.)


Secondly, ServiceNet can also improve performance for certain users. For example, if you're running Magento, then you are probably using Redis for back-end and session storage. Cache traffic is very, very chatty, so network latency is important. By offering communication over ServiceNet, we expect the performance impact of connecting to the service via a Public IP to be minimized.


How Do You Create a ServiceNet Connection?

In this scenario, let's say you have Performance Cloud Servers in DFW with ServiceNet enabled. When you create a new instance in the ObjectRocket Control Panel, you will be given the choice of a Public IP or ServiceNet, as well as the region, in this case DFW.


When the new instance is provisioned, you will be presented with a connection string containing your hostname, port and password. Finally, you will need to add an ACL for the cloud server you are connecting from. Now you have successfully configured your Cloud Server and ObjectRocket for Redis instance to communicate over ServiceNet.

Matthew Barker

Matthew Barker is a Product Manager on the Database team at Rackspace -- Overseeing ObjectRocket for Redis, a high performance, highly available & fully managed Redis datastore service & RedisToGo, a leading managed Redis datastore service.


Filed under Performance, Redis.

As I am afforded the privilege of speaking with many people and companies using Redis, in use cases ranging from simple caching to multi-terabyte setups, the topic I am asked to address more than any other is performance. Redis is different in how you approach performance. With many, if not most, database servers you try to improve performance. With Redis, the goal is to not slow it down. This is a very different approach and requires a different mindset to take advantage of it.

Performance Metrics – Is Latency King?

There are essentially two performance metrics you are primarily concerned with when using Redis: how many commands (or transactions) per second you can execute, and how long they take. When you break it down, you find the former is a secondary result of the latter. Since Redis is single-threaded, how many ops/sec you can push is absolutely tied to how long each one takes. Thus, what ultimately matters is what I refer to as "command latency."

I consider one of Redis's advantages to be its simplicity. You issue commands, not queries. Commands are a far simpler route to pull data, and that simplicity is reflected in their speed. It also allows the developers to provide optimized commands rather than trying to optimize for arbitrary queries. This elegant simplicity is then afforded to programmers consuming Redis. It often means doing "query"-type operations, such as filtering a set, on the client side rather than having the server software do it.

While some feel this type of query is best done in a consistent way on the server, I am currently of the opinion this is no longer the preferred route. The reason is that bugbear we call "scalability." When you start running a "horizontally scalable" web service or site, for example, you'll often find early on that this pattern works fine. However, when you start handling "obscene" levels of traffic, you quickly learn the database is a significant bottleneck. Shortly thereafter you learn that this database, usually an SQL store such as MySQL, is not "horizontally scalable." You can't simply add more.

Command Latency

Of course, neither is Redis. However, the difference here is that by keeping filtering, sorting, and anything you can't express in a single Redis command (or at most a few) in your application, you aren't loading the DB down with logic, and thus "things to process." Redis should be used as a "data store," not a traditional "database server." This is the first aspect of "don't slow Redis down" you need to grok. It should not need to be stated that once you start down the path of Lua scripting, you run the risk of a net performance loss. You might not see it in development when your traffic is low to moderate. However, when you hit large scale you'll see it. And by then the logic is too often already baked in and becomes a heavy amount of technical debt to move into application code. We all know how much priority is afforded to clearing technical debt.

This isn't to say Lua scripting doesn't have a place. It merely means it should be highly scrutinized. A good rule of thumb for whether a Lua script is a net performance loss is to compare the cost of the additional round trips you'd have to execute if you handled that logic client-side against the script's execution time. The comparison is against round-trip cost, not how quickly your client code can run the logic. Consider it this way: if you reduce your round-trip cost by 2ms but add 3ms in script execution time, you went the wrong direction.
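That rule of thumb is simple arithmetic, sketched here with the illustrative numbers from the text (the function name is ours):

```python
def lua_script_net_gain_ms(saved_round_trip_ms, script_exec_ms):
    """Positive result: the script is a net win; negative: it slows Redis.

    Compare only the round trips the script eliminates against the time
    the script blocks the (single-threaded) server.
    """
    return saved_round_trip_ms - script_exec_ms

# The example from the text: save 2ms of round trips, add 3ms of execution.
print(lua_script_net_gain_ms(2.0, 3.0))  # -> -1.0, the wrong direction
```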

Here we need to be keenly aware of the nature of a single-threaded server. That 3ms script is blocking potentially hundreds (or even thousands) of commands while it executes. If you split those into a (pull) -> (logic) -> (pull) sequence, the server is able to process additional requests during the "(logic)" phases. By keeping the logic in client code, you preserve the concurrency inherent to a system with many clients. If you need transactions or Lua scripting, then of course use them. But don't use them because they seem to make your code "easier." Always be aware of the concurrency performance hit, measure it, and make the choice consciously.

This leads us to the second rule of "Don't Slow Redis Down": preserve concurrency by avoiding server-side logic in scripts. A side benefit is that a later migration to a Redis Cluster setup won't require rewriting or abandoning Lua scripts that operate on multiple keys.

Other Ways Redis Can Be Slowed Down

There are a few system-level, or operational, aspects of running a Redis server which can slow Redis down. As with any server, I/O resource limits can slow Redis down. For example, if you need 8GB of network bandwidth but have 1GB, it will be "slow" for you. If your daemon process is limited to fewer open sockets than you need concurrent connections, time will be spent waiting for connections to close instead of executing commands.

Probably the two most common operational choices which cause Redis to slow down are 1) putting it on a virtual machine, especially one on a Xen hypervisor, and 2) heavy disk persistence.

The first one is fairly well addressed in standard Redis literature: don't put it on a Xen hypervisor VM. The second one, persistence, appears to be addressed as well, but in my estimation the standard advice doesn't go far enough.

There are, of course, the standard recommendations: use a fast local disk, ensure you have enough memory to handle the copy-on-write dance when persisting, or even run persistence on slaves only. These can indeed have effects that ripple through to your command latency. What is missing is how to properly determine whether they do.

One "standard"[1] way to test this is to use the inbuilt latency testing. Before I get into the technical details of how to do that, I want to address the larger "how" question. The main subset here is when to run these tests. Firstly, this test is likely to be meaningless if your data set is small. How small? Determine how long it takes to dump your data to disk. If we're talking a second or two, maybe even ten, you're less likely to get meaningful data. Of course, this is all predicated on you doing a BGSAVE rather than a SAVE.

Persistence Latency

This leads us to the first principle of what I refer to as "persistence latency" – the time from a command hitting the server to the result hitting some form of persistence[2]: find the least latency-introducing persistence option you can. Depending on your network and disk, either AOF or a slave server will be the lowest-latency persistence option, with RDB coming in last.

RDB will come in last primarily because it has an inbuilt delay of N changes over T seconds. So, assuming you have enough changes in that window (the smallest default window is 60 seconds), your persistence delay is the interval T plus the time to write the RDB to disk. If it takes 30 seconds to dump that memory to disk with the default 60-second interval, your "persistence latency" would be (60+30) 90 seconds.
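The arithmetic in this paragraph, as a tiny sketch (the function name is ours):

```python
def rdb_persistence_latency_s(save_interval_s, dump_time_s):
    """Worst-case time from a write to it being on disk via RDB:
    wait out the save interval, then wait for the dump itself."""
    return save_interval_s + dump_time_s

# The example from the text: 60s interval, 30s to write the RDB.
print(rdb_persistence_latency_s(60, 30))  # -> 90
```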

This does, however, raise the question of whether this persistence latency is a problem. There are two aspects to this question: 1) Is it sufficient for your business requirements? and 2) Does it slow down your Redis?

The first question is one for which I cannot provide an answer that fits everyone. I can say, however, that if your requirements for persistence latency are tighter than the above formula allows, the answer is likely to trend toward "use a slave and/or AOF" instead of RDB. This assumes you aren't making obvious mistakes on the platform you run Redis on. Which brings us to the second aspect: does it slow down Redis?

For some, the question of "how long does it take to save the RDB file" becomes paramount in their minds. In my view, this is a mistake. Does it matter how long it takes? The real question is "does the save slow down Redis?" If not, then you shouldn't care whether it takes 1 second or 1 hour. For example, consider Server A, which takes 1000 seconds to save the RDB. While not saving, the intrinsic command latency range is 30-100 microseconds. During the save, latency is still in the 30-100 microsecond range. In this case, it would be premature optimization to work on reducing the RDB save time under the notion that you have poor Redis performance.

However, Server B takes a mere 10 seconds, but command latency jumps from 30-100us to 130-250us. Now you have a reason to be concerned with how long that RDB file takes to generate, because the save is slowing Redis down and you want to minimize that. If that means faster disks, at least now you have reason to justify them – assuming that increase of around 100-150 microseconds causes performance anxiety for your application(s). If it doesn't, then you're back to premature optimization.
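One way to frame the Server A vs. Server B decision (the 10% tolerance here is our illustrative threshold, not a Redis-defined number):

```python
def save_slows_redis(baseline_us, during_save_us, tolerance=1.10):
    """True when command latency during a BGSAVE exceeds the no-save
    baseline by more than the tolerance (10% here; the threshold is
    an assumption, tune it to your own latency budget)."""
    return during_save_us > baseline_us * tolerance

# Server A: latency unchanged during a 1000s save -- no reason to care.
print(save_slows_redis(100, 100))   # -> False
# Server B: 100us baseline jumps to 250us -- the save is the problem.
print(save_slows_redis(100, 250))   # -> True
```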

Now, as to how to measure that latency, I’ll go into more details in part two of this post as this one has become quite long already.

  1. Standard in the sense that it is included as part of Redis, thus everyone can run the exact same test
  2. This persistence can be a slave server, an RDB save/bgsave, or an AOF write.

Bill Anderson

I am The Real Bill. I was born a Bill, not a William. I work for Rackspace as a Cloud Sites Platform Engineer - Linux in San Antonio, TX. I write significant amounts of code in Python and C++ (particularly Qt4). Working on adding Go and Erlang to my repertoire. I am also a gamer, as in tabletop RPGs (which some liken to "playing a spreadsheet") and have been a beta-tester for several D20 products.


Filed under Getting Started, HowTo, Redis.

The speed and flexibility of Redis make it an extremely powerful tool for developers, and it can be used in a variety of different ways. Although Redis is often referred to as a key-value store, it is much better described as a Data Structure Server, as it supports five data structure types, namely:

  • Strings
  • Hashes
  • Lists
  • Sets
  • Sorted sets

Each structure type has shared commands as well as some commands that are specific to a particular structure type.

This introduction will cover the basics of how to use Redis and an overview of the different data structures. We will cover some of the more basic commands, though bear in mind that Redis has over 160 commands at the time of writing; you can find excellent documentation on the Redis website.


Starting Redis

Use an instance provided by ObjectRocket for Redis or install Redis locally.

To install locally, download, extract and compile the code:
$ wget
$ tar xzf redis-2.8.17.tar.gz
$ cd redis-2.8.17
$ make

Start the local instance:
$ src/redis-server


Using Redis

Now you’re ready to interact with Redis using the built-in client. For an ObjectRocket for Redis instance start the redis-cli with the hostname, port, and password:

$ redis-cli -h my-host -p 1234 -a mypassword
If you are using a local instance, the host is localhost, the default port is 6379, and there is no password by default.

$ redis-cli



Strings

At the very simplest level, Redis can be described as a key-value store. By issuing the command SET foo bar, you set the value of foo to bar. For example, issue the following command from the redis-cli you just started:

redis> SET foo bar
Now read the value of foo with the GET command.
redis> GET foo
You can set keys to expire after a given amount of time using the EXPIRE command. TTL reports the time remaining before the key expires.
redis> SET foo bar
redis> EXPIRE foo 120
(integer) 1
redis> ttl foo
(integer) 113



Lists

One of the distinguishing features of Redis is that the values of a key can be data structures, rather than just simple values.

To create a list, use LPUSH or RPUSH. If the list already exists, LPUSH will add the given value to the beginning of the list and RPUSH will add it to the end.

redis> LPUSH cities "San Francisco"
(integer) 1
redis> RPUSH cities "Austin"
(integer) 2
redis> LRANGE cities 0 -1
1. "San Francisco"
2. "Austin"
redis> SORT cities alpha
1. "Austin"
2. "San Francisco"
redis> LPOP cities
"San Francisco"
redis> LRANGE cities 0 -1
1. "Austin"


The SORT command sorts the list lexicographically in ascending order with the ALPHA argument. To sort in descending order, append the DESC argument to the SORT command.
The RPOP command pops an element from the list’s end. LPOP pops an element from the list’s beginning.



Sets

Sets are similar to lists, except each element can occur only once. In the example below, we create a small set of US states. The SADD command adds an item to the set, unless the item already exists in the set. If the item does not exist, it is added and 1 is returned; otherwise, 0 is returned. SMEMBERS returns all items in the set. SCARD returns the number of elements in the set. SREM removes an item from the set.

redis> sadd states "Vermont"
(integer) 1
redis> smembers states
1. "Vermont"
redis> sadd states "Texas"
(integer) 1
redis> scard states
(integer) 2
redis> sadd states "Vermont"
(integer) 0
redis> smembers states
1. "Vermont"
2. "Texas"
redis> sadd states "California"
(integer) 1
redis> smembers states
1. "Vermont"
2. "Texas"
3. "California"
redis> srem states "California"
(integer) 1
redis> smembers states
1. "Vermont"
2. "Texas"
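The return values in the transcript above can be modeled with a plain Python set. This is a sketch of the semantics only, not a Redis client; the helper name is ours:

```python
# Modeling SADD/SCARD/SREM return values with a Python set.
states = set()

def sadd(s, member):
    if member in s:
        return 0      # already present: Redis returns 0
    s.add(member)
    return 1          # newly added: Redis returns 1

print(sadd(states, "Vermont"))   # -> 1
print(sadd(states, "Texas"))     # -> 1
print(sadd(states, "Vermont"))   # -> 0, duplicates are ignored
print(len(states))               # SCARD -> 2
states.discard("Texas")          # SREM
print(len(states))               # -> 1
```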



Hashes

Using hashes, you can map string values to fields within each key. A hash with a few fields is stored in a way that takes very little space, so you can store millions of objects in a small Redis instance. In the example below, the name of user:1 is set to "john racker" using the HSET command. The HGET command retrieves the value of a single field. HGETALL returns all the fields and values of the specified key.

redis> hset user:1 name "john racker"
(integer) 1
redis> hget user:1 name
"john racker"
redis> hset user:1 company "objectrocket"
(integer) 1
redis> hget user:1 company
"objectrocket"
redis> hset user:1 city "austin"
(integer) 1
redis> hget user:1 city
"austin"
redis> hgetall user:1
1. "name"
2. "john racker"
3. "company"
4. "objectrocket"
5. "city"
6. "austin"



Sorted Sets

Sorted sets are similar to sets in that they are non-repeating collections of strings, but every member is associated with a score. Members are kept sorted by score in ascending order. The same element can only appear once, although scores may repeat.

In this example, we'll add the point totals of the top five teams in the English Premier League as of 10/23/2014:

redis> ZADD EPL 22 "chelsea"
(integer) 1
redis> ZADD EPL 17 "man city"
(integer) 1
redis> ZADD EPL 16 "southampton"
(integer) 1
redis> ZADD EPL 13 "liverpool"
(integer) 1
redis> ZADD EPL 13 "west ham"
(integer) 1
redis> ZRANK EPL "chelsea"
(integer) 4
redis> ZRANK EPL "liverpool"
(integer) 0
redis> ZRANK EPL "arsenal"
(nil)


Next, we'll list the teams by points total using ZRANGE:
redis> ZRANGE EPL 0 -1
1) "liverpool"
2) "west ham"
3) "southampton"
4) "man city"
5) "chelsea"
redis> ZRANGE EPL 2 3
1) "southampton"
2) "man city"


Remember, sorted sets are kept in ascending order, so to see the teams ranked from highest to lowest points total we need to use ZREVRANGE:

redis> zrevrange EPL 0 -1
1) "chelsea"
2) "man city"
3) "southampton"
4) "west ham"
5) "liverpool"
redis> zrevrange EPL 0 -1 withscores
1) "chelsea"
2) "22"
3) "man city"
4) "17"
5) "southampton"
6) "16"
7) "west ham"
8) "13"
9) "liverpool"
10) "13"


Next, Southampton plays a game and wins, gaining 3 points. We use ZINCRBY to increment the score, then ZREVRANGE to see how Southampton has moved into second place in the league. We can also use ZREVRANK to see Southampton's rank. Note: the rank is 0-based, which means that the member with the highest score (Chelsea) has rank 0.

redis> zincrby EPL 3 "southampton"
"19"
redis> zrevrange EPL 0 -1 withscores
1) "chelsea"
2) "22"
3) "southampton"
4) "19"
5) "man city"
6) "17"
7) "west ham"
8) "13"
9) "liverpool"
10) "13"
redis> zrevrank EPL "southampton"
(integer) 1

redis> zrevrange EPL 0 2
1) "chelsea"
2) "southampton"
3) "man city"
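The ranks Redis reports can be reproduced with a little Python: members order by score, with ties broken lexicographically, and ZREVRANK counts from the highest score down. This is a model of the semantics, not a Redis client:

```python
# Model of sorted-set ranking after ZINCRBY EPL 3 "southampton".
scores = {"chelsea": 22, "man city": 17, "southampton": 16,
          "liverpool": 13, "west ham": 13}
scores["southampton"] += 3  # ZINCRBY

def zrevrank(scores, member):
    # Redis sorts ascending by (score, member); ZREVRANK reverses that.
    ascending = sorted(scores, key=lambda m: (scores[m], m))
    return list(reversed(ascending)).index(member)

print(zrevrank(scores, "chelsea"))      # -> 0, still top of the league
print(zrevrank(scores, "southampton"))  # -> 1, up into second place
```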


In coming weeks we’ll dive deeper into Redis and some of the use cases for this wonderfully versatile developer tool. Please drop us a line if you have any questions about running Redis or would like to tell us about how you’re using Redis in your application stack.

Matthew Barker

Matthew Barker is a Product Manager on the Database team at Rackspace -- Overseeing ObjectRocket for Redis, a high performance, highly available & fully managed Redis datastore service & RedisToGo, a leading managed Redis datastore service.


Filed under HowTo, MongoDB Basics.

For those of you new to using MongoDB, MongoDB space usage can seem quite confusing.  In this article, I will explain how MongoDB allocates space and how to interpret the space usage information in our ObjectRocket dashboard to make judgements about when you need to compact your instance or add a shard to grow the space available to your instance.


First, let's start off with a brand-new Medium instance consisting of a single 5 GB shard. I am going to populate this instance with some test data in a database named "ocean". Here's what the space usage for this instance looks like after adding the test data and creating a few indexes (for the purposes of this article, I deliberately added additional indexes that I knew would be fairly large relative to the test data set):

Example space usage after populating with test data and indexes

How is it that 315 MiB of data and 254 MiB of indexes means we are using 2.1 GiB out of our 5 GiB shard?  To explain, let’s begin with how MongoDB stores data on disk as a series of extents. Because our ObjectRocket instances run with the smallfiles option, the first extent is allocated as 16 MB. These extents double in size until they reach 512 MB, after which every extent is allocated as a 512 MB file. So our example “ocean” database has a file structure as follows:

# ls -lh ocean/
total 1.5G
-rw------- 1 mongodb mongodb  16M Aug 20 22:30 ocean.0
-rw------- 1 mongodb mongodb  32M Aug 20 20:44 ocean.1
-rw------- 1 mongodb mongodb  64M Aug 20 22:23 ocean.2
-rw------- 1 mongodb mongodb 128M Aug 20 22:30 ocean.3
-rw------- 1 mongodb mongodb 256M Aug 20 22:30 ocean.4
-rw------- 1 mongodb mongodb 512M Aug 20 22:30 ocean.5
-rw------- 1 mongodb mongodb 512M Aug 20 22:30 ocean.6
-rw------- 1 mongodb mongodb  16M Aug 20 22:30 ocean.ns
drwxr-xr-x 2 mongodb mongodb 4.0K Aug 20 22:30 _tmp

These extents store both the data and indexes for our database. With MongoDB, as soon as any data is written to an extent, the next logical extent is allocated. Thus, with the above structure, ocean.6 likely has no data at the moment, but has been pre-allocated for when ocean.5 becomes full. As soon as any data is written to ocean.6, a new 512 MB extent, ocean.7, will again be pre-allocated. When data is deleted from a MongoDB database, the space is not released until you compact — so over time, these data files can become fragmented as data is deleted (or if a document outgrows its original storage location because additional keys are added). A compaction defragments these data files because during a compaction, the data is replicated from another member of the replica set and the data files are recreated from scratch.
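The doubling-then-capping allocation described above can be sketched in a few lines (the function name is ours):

```python
def extent_sizes_mb(total_extents):
    """Extent allocation under the smallfiles option: start at 16 MB,
    double each time until 512 MB, then stay at 512 MB."""
    sizes, size = [], 16
    for _ in range(total_extents):
        sizes.append(size)
        size = min(size * 2, 512)
    return sizes

# Matches the ocean.0 .. ocean.6 files in the listing above.
print(extent_sizes_mb(7))  # -> [16, 32, 64, 128, 256, 512, 512]
```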

An additional 16 MB file, ocean.ns, stores the namespace. This same pattern occurs for each database on a MongoDB instance. Besides our "ocean" database, there are two additional system databases on our shard: "admin" and "local". The "admin" database stores the user information for all database users (prior to 2.6.x, this database was used only for admin users). Even though the admin database is small, we still have a 16 MB extent, a pre-allocated 32 MB extent, and a 16 MB namespace file for it.

The second system database is the “local” database. Each shard we offer at ObjectRocket is a three-member replica set. In order to keep these replicas in sync, MongoDB maintains a log, called the oplog, of each update. This is kept in sync on each replica and is used to track the changes that need to be made on the secondary replicas. This oplog exists as a capped collection within the “local” database. At ObjectRocket we configure the size of the oplog to generally be 10% of shard size — in the case of our 5 GB shard, the oplog is configured as 500 MB. Thus the “local” database consists of a 16 MB extent, a 512 MB extent, and a 16 MB namespace file.

Finally, our example shard contains one more housekeeping area, the journal. The journal is a set of 1–3 files that are approximately 128 MB each in size. Whenever a write occurs, MongoDB first writes the update sequentially to the journal. Then a background thread periodically flushes these updates to the actual data files (the extents I mentioned previously), typically once every 60 seconds. The reason for this double-write is that writing sequentially to the journal is often much, much faster than the seeking necessary to write to the actual data files. By writing the changes immediately to the journal, MongoDB can ensure data recovery in the event of a crash without requiring every write to wait until the change has been written to the data files. In the case of our current primary replica, I see we have two journal files active:

# ls -lh journal/
total 273M
-rw------- 1 mongodb mongodb 149M Aug 20 22:26 j._1
-rw------- 1 mongodb mongodb 124M Aug 20 22:30 j._2

MongoDB rotates these files automatically depending on the frequency of updates versus the frequency of background flushes to disk.

So now that I’ve covered how MongoDB uses disk space, how does this correspond to what is shown in the space usage bar from the ObjectRocket dashboard that I showed earlier?

  • NS value, 48 MB — the sum of the three 16 MB namespace files for the three databases I mentioned, ocean, admin, and local.
  • Data value, 315 MiB — the sum of the value reported for dataSize in db.stats() for all databases (including system databases).
  • Index value, 253.9 MiB, — the sum of the value reported for indexSize in db.stats() for all databases (including system databases).
  • Storage value, 687.2 MiB — the sum of data plus indexes for all databases plus any unreclaimed space from deletes.
  • Total File value, 2.0 GiB – how much disk we are using in total on the primary replica. Beyond the space covered by the Storage value and the NS value, this also includes any preallocated extents, but not the space used by the journal.

Given these metrics, we can make some simple calculations to determine whether this instance is fragmented enough to need a compaction.  To calculate the space likely lost due to fragmentation, use the following:

100% – (Data + Indexes) / Storage

In the case of our example instance, this works out to 17% (100% – (315 MiB Data + 253.9 MiB Index) / 687.2 MiB Storage = 17%).  I would recommend compacting your instance when the fragmentation approaches 20%.
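The same calculation in a few lines of Python, using the dashboard numbers from the example:

```python
def fragmentation_pct(data_mib, index_mib, storage_mib):
    """Share of storage not accounted for by live data + indexes,
    i.e. 100% - (Data + Indexes) / Storage."""
    return 100.0 * (1 - (data_mib + index_mib) / storage_mib)

# The example instance: 315 MiB data, 253.9 MiB indexes, 687.2 MiB storage.
pct = fragmentation_pct(315, 253.9, 687.2)
print(round(pct))  # -> 17, approaching the ~20% compaction threshold
```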

Another calculation tells us whether we need to add a shard to this instance based on our overall space usage. To calculate your overall space usage, do the following:

(Total File / (Plan Size * number of shards)) * 100%

For our example instance, this works out to 40% ((2 GiB / (5 GiB * 1 shard)) * 100% = 40%). We generally recommend adding a shard when overall space usage approaches 80%. If you notice your space usage reaching 80%, contact Support and we can help you add a shard to your instance.
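The same kind of check works for capacity; a minimal sketch using the example instance's numbers and our 80% rule of thumb:

```javascript
// Overall space usage as a percentage of total plan capacity.
function spaceUsagePercent(totalFileGiB, planSizeGiB, numShards) {
  return (totalFileGiB / (planSizeGiB * numShards)) * 100;
}

// Example instance: 2.0 GiB Total File on a 5 GiB plan with one shard.
const usage = spaceUsagePercent(2.0, 5.0, 1);
console.log(usage + "%"); // 40%
console.log(usage >= 80 ? "consider adding a shard" : "capacity is fine");
```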

Jeff Tharp

Customer Data Engineer at ObjectRocket, enjoys big dogs and supporting Big Data after dark.


Posted by & filed under Customer Success Stories, HowTo, Performance.

Appboy is the world’s leading marketing automation platform for mobile apps. We collect billions of data points each month by tracking what users are doing in our customers’ mobile apps and allowing them to target users for emails, push notifications and in-app messages based on their behavior or demographics. MongoDB powers most of our database stack, and we host dozens of shards across multiple clusters at ObjectRocket.


One common performance optimization strategy with MongoDB is to use short field names in documents. That is, instead of creating a document that looks like this:

{first_name: "Jon", last_name: "Hyman"}

use shorter field names so that the document might look like this:

{fn: "Jon", ln: "Hyman"}

Since MongoDB doesn’t have a concept of columns or predefined schemas, this structure is advantageous because field names are duplicated on every document in the database. If you have one million documents that each have a “first_name” field on them, you’re storing that string a million times. This leads to more space per document, which ultimately impacts how many documents can fit in memory and, at large scale, may slightly impact performance, as MongoDB has to map documents into memory as it reads them.


In addition to collecting event data, Appboy also lets our customers store what we call “custom attributes” on each of their users. As an example, a sports app might want to store a user’s “Favorite Player,” while a magazine or newspaper app might store whether or not a customer is an “Annual Subscriber.” At Appboy, we have a document for each end user of an app that we track, and on it we store those custom attributes alongside fields such as their first or last name. To save space and improve performance, we shorten the field names of everything we store on the document. For the fields we know in advance (such as first name, email, gender, etc.) we can do our own aliasing (e.g., “fn” means “first name”), but we can’t predict the names of custom attributes that our customers will record. If a customer decided to make a custom attribute named “supercalifragilisticexpialidocious,” we don’t want to store that on all their documents.
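The aliasing of fields known in advance can be sketched as follows. The alias map here is hypothetical for illustration; Appboy's actual mapping isn't published:

```javascript
// Hypothetical alias map for fields known in advance.
const ALIASES = { first_name: "fn", last_name: "ln", email: "em", gender: "g" };

// Rewrite a document's known field names to their short aliases,
// leaving unrecognized fields untouched.
function shorten(doc) {
  const out = {};
  for (const [key, value] of Object.entries(doc)) {
    out[ALIASES[key] || key] = value;
  }
  return out;
}

console.log(shorten({ first_name: "Jon", last_name: "Hyman" }));
// { fn: 'Jon', ln: 'Hyman' }
```

Custom attributes can't be handled this way because their names aren't known ahead of time, which is what motivates the name store described next.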


To solve this, we tokenize the custom attribute field names using what we call a “name store.” Effectively, it’s a document in MongoDB that maps values such as “Favorite Player” to a unique, predictable, very short string. We can generate this map using only MongoDB’s atomic operators.


The name store document schema is extremely basic: there is one document for each customer, and each document has only one array field named “list.” The idea is that the array will contain all the values for the custom attributes, and the index of a given string will be its token. So if we want to translate “Favorite Player” into a short, predictable field name, we simply check “list” to see where it is in the array. If it is not there, we can issue an atomic push to add the element to the end of the array:

db.custom_attribute_name_stores.update(
  {_id: X, list: {$ne: "Favorite Player"}},
  {$push: {list: "Favorite Player"}}
)

We then reload the document and determine the index. Ideally, we would have used $addToSet, but $addToSet does not guarantee ordering, whereas $push is documented to append to the end by default.


So at this point, we can translate something like “Favorite Player” into an integer value. Say that value is 1. Then our user document would look like this:

{fn: "Jon", ln: "Hyman", custom: {"1": "LeBron James"}}

Field names are short and tidy! One great side effect of this is that we don’t have to worry about our customers using characters that MongoDB can’t support without escaping, such as dollar signs or periods.


Now, you might be thinking that MongoDB cautions against constantly growing documents and that our name store document can grow unbounded. In practice, we have extended our implementation slightly so we can store more than one document per customer. This lets us put a reasonable cap on how many array elements we allow before generating a new document. The best part is that we can still do all this atomically using only MongoDB! To achieve this, we add another field to each document called “least_value.” The “least_value” field represents how many elements were added to previous documents before this one was created. So if we see a document with a “least_value” of 100 and a “list” of [“Season Ticket Holder”, “Favorite Player”], then the token value for “Favorite Player” is 101 (we’re using zero-based indexing). In this example, we are only storing 100 values in the “list” array before creating a new document. Now, when inserting, we modify the push slightly to operate on the document with the highest “least_value,” and also ensure that “list.99” does not exist (meaning that there is nothing at index 99 in the “list” array). If an element already exists at that index, the push operation will do nothing. In that case, we know we need to create a new name store document with a “least_value” equal to the total number of elements that exist across all the documents. Using an atomic findAndModify, we can create the new document if it does not exist, fetch it back and then retry the $push again.
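The lookup side of this scheme can be sketched in plain JavaScript. The documents and field names below are illustrative; the real implementation lives in MongoDB and uses the atomic $push/findAndModify operations described above:

```javascript
// Illustrative in-memory name store documents for one customer.
// "least_value" is how many elements were stored in earlier documents,
// so a value's token is least_value + its index within "list".
const nameStoreDocs = [
  { least_value: 0, list: ["Annual Subscriber"] },
  { least_value: 100, list: ["Season Ticket Holder", "Favorite Player"] },
];

// Translate an attribute name to its token, or null if not yet tokenized
// (which, in MongoDB, would trigger the atomic $push described above).
function tokenFor(docs, name) {
  for (const doc of docs) {
    const idx = doc.list.indexOf(name);
    if (idx !== -1) return doc.least_value + idx;
  }
  return null;
}

console.log(tokenFor(nameStoreDocs, "Favorite Player")); // 101
```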


If our customer has more than just a few custom attributes, reading back all the name store documents to translate from values to tokens can be expensive in terms of bandwidth and processing. However, since the token value of a given field is always the same once it has been computed, we cache the tokens to speed up the translation.


We’ve applied the “name store token” paradigm in various parts of our application to cut down on field name sizes while continuing to use a flexible schema. It can also be helpful for values. Let’s say that a radio station app stores a custom attribute that is an array of the top 50 performing artists that a user listens to. Instead of having an array with 50 strings in it, we can tokenize the artist names and store an array of 50 integers on the user instead. Querying users who like a certain artist now involves two token lookups: one for the field name and one for the value. But since we cache the translation from value to token, we can use a multi-get in our cache layer to maintain a single round-trip to the cache when translating any number of values.


This optimization certainly adds some indirection and complexity, but when you store hundreds of millions of users like we do at Appboy, it’s a worthwhile optimization. We’ve saved hundreds of gigabytes of expensive SSD space through this trick.


Want to learn more? I’ll be discussing devops at Appboy during the Rackspace Solve NYC Conference on Sept 18th at the Cipriani.


Jon Hyman

Jon Hyman is the Cofounder and CIO of Appboy, Inc. Appboy is the leading platform for marketing automation for apps. The company’s suite of services empowers mobile brands to manage the customer lifecycle beyond the download.


Posted by & filed under Company, Features, ObjectRocket Features.

Today, we’re excited to announce a new addition to the ObjectRocket platform: ObjectRocket for Redis. We love Redis at ObjectRocket. Redis is built for high performance, has versatile data structures, and has great documentation, allowing developers to easily integrate it into highly scalable application stacks. We use it internally, and so do many of our customers, who have been pushing us hard to release a Redis Database-as-a-Service offering.
We built the service with many of the core features that customers have come to expect from ObjectRocket for MongoDB:

  • All instances are highly available with automatic failover of the Redis master to a replica in the event of a master node failure.
  • We built ObjectRocket for Redis on our own high performance infrastructure, using containers to eliminate the noisy neighbor problems of traditional hardware virtualization and make Redis run as fast as possible. Because we control the entire stack, we have more room for innovation and far greater control if there is a problem.
  • ACLs – we embrace a secure-by-default approach at ObjectRocket and require network Access Control List (ACL) entries for every instance.
  • Free backups – we take snapshots of your data to insure you against data loss.

Customers can also focus on their business while feeling secure in the knowledge that ObjectRocket for Redis is backed by Redis specialists 24/7/365. This is not just marketing speak: Rackspace, our parent company, also owns RedisToGo, and we have vast experience managing and supporting over 42,000 running Redis instances.

Redis is currently available in our Virginia region, and will become available in more regions throughout August. ObjectRocket for Redis servers have high bandwidth and directly peer with networks like AWS so we’re only a few milliseconds away from your app servers, no matter where they run.

Over the coming months we will continue to release new features and functionality of the product. As always, please don’t be shy about giving us feedback.

Matthew Barker

Matthew Barker is a Product Manager on the Database team at Rackspace, overseeing ObjectRocket for Redis, a high performance, highly available and fully managed Redis datastore service, and RedisToGo, a leading managed Redis datastore service.


Posted by & filed under HowTo.

At MongoDB World last month, MongoDB founder and CTO Eliot Horowitz announced support for pluggable storage engines, scheduled for the 2.8 release. This is exciting news: it means MongoDB users will be able to choose a storage engine that best suits their workload, and because the API is planned to fully support all MongoDB features, they won’t have to give up any of the functionality they currently enjoy. Not only that, but nodes in the same replica set will be able to use different storage engines, enabling all sorts of interesting configurations for varying needs.

The great thing about MongoDB being fully open source is that we don’t have to wait until 2.8 is actually released to play around with these very experimental features. The entirety of the MongoDB source code can be cloned from GitHub and compiled to include any experimental features currently being worked on.

In the example below I’ll show you how to build MongoDB with the RocksDB example storage engine presented at MongoDB World.

Starting from a freshly installed CentOS 6.5 cloud instance, we’ll grab the basic dependencies:

yum groupinstall 'Development Tools'; yum install git glibc-devel scons

Next we’ll get the MongoDB source code from GitHub:

git clone

Now all that’s left is to compile the source with RocksDB support enabled:

scons --rocksdb=ROCKSDB mongo mongod

Or speed it up by using the -j option to specify the number of parallel jobs. If you plan to dedicate the system you’re compiling on to the build for the time being, a good indicator is the number of cores in your machine plus one; mine looked like this:

scons -j 17 --rocksdb=ROCKSDB mongo mongod

It’s worth noting that pluggable storage engine support and the RocksDB engine are completely experimental at this point, so there’s a good chance you’ll encounter errors and be unable to compile from master; that’s to be expected at this stage. If you’d like to keep an eye on how things are progressing, the MongoDB dev mailing list is a good place to start.

Once the compile has finished you’ll want to start up a mongod process using the new --storageEngine parameter:

./mongod --storageEngine rocksExperiment

And finally you can test everything by connecting and inserting a simple document, then using db.stats(). You should see RocksDB statistics piped back to you if everything has gone as planned.

As you can see it’s fairly simple to get up and running with experimental features enabled. I’m very excited to see the pluggable storage engine code progress and see more new engines announced as we get closer to the 2.8 release.

Masen Marshall

Technical Lead at ObjectRocket, Chief Helper, Hacker, and Writer.


Posted by & filed under HowTo.

MongoDB Inc. has introduced lots of great new enterprise features with release 2.6 of MongoDB; however, one thing still absent is a desktop application to manage your database. Introducing Robomongo, the cross-platform and open source MongoDB management tool. With the following instructions you’ll see how easy it is to integrate Robomongo with your ObjectRocket MongoDB instance.

Let’s get started! First we’re going to need to note down some details from the ObjectRocket control panel:

  • Database connect string (note that the port is different for SSL vs non-SSL connections)
  • Database username & password

Download and install Robomongo for your OS of choice (at the time of writing the most current version is 0.8.4, which is the release I’m basing these instructions on).

Now open Robomongo. Initially you’ll be greeted with the MongoDB Connections box; click the Create link in the top left of the screen.

MongoDB Connections screen


After clicking the Create link above, you’ll see the following Connection Setting screen. I’ve named my instance ObjectRocket but you may want to use more specific naming if you have several databases.


In the Address field, enter the database connect string you noted down earlier. Remember that if you intend to connect via SSL, the target port will be different. Usually this is your <plain text port> + 10000, so for my example the plain text port is 23042 and the SSL port is 33042.
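Assuming your instance follows the usual convention described above, the SSL port can be derived from the plain text port with a one-liner:

```javascript
// Rule of thumb for ObjectRocket instances: SSL port = plain text port + 10000.
// Always verify against your control panel; this is a convention, not a guarantee.
function sslPort(plainTextPort) {
  return plainTextPort + 10000;
}

console.log(sslPort(23042)); // 33042
```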


Connection Settings


Now select the authentication tab and add the user credentials you noted down earlier.

Authentication Tab

If you prefer to use SSL, select the SSL tab at the top and tick Use SSL Protocol. ObjectRocket doesn’t currently support SSL Certificates so disregard that box.

SSL Settings

Now press Test to confirm the settings are correct. If everything works you should see a Diagnostic message box similar to the one below.


Diagnostic Message

Press Save to store your connection. Congratulations, you have successfully connected a great desktop MongoDB management application to your ObjectRocket instance!

But what if you’re using strict ACLs and you work from several locations or your home broadband does not have a static IP? You will have to keep adding your local (changing) public IP address to your instance ACLs in the ObjectRocket control panel before you can work with Robomongo.

Another method is to configure Robomongo to connect to your instance through a (Linux) server with a static IP (for example, one of your application servers, or a cloud server created to act as a proxy) using an SSH tunnel. The following instructions will guide you through the process.


First create yourself a user on a Linux server that has a static public IP. If this is not a server that is already allowed access via your ACL rule set, then remember to add this server’s IP address to your instance ACLs.


Generate an SSH public/private key pair and install the public key on the Linux server that will be our proxy host; an excellent article on how to configure SSH keys can be found here.


Now configure Robomongo to use our SSH proxy host and key.

SSH Connection Settings

Test your connection again. If the test completes without error, press Save to store your connection settings. You have successfully configured Robomongo to access your ObjectRocket instance via a proxy host over SSH.




Posted by & filed under HowTo.

JSONStudio and ObjectRocket, A match made in Java.


If you have ever worked with MySQL then you have probably used tools like PHPMyAdmin or MySQL Workbench to interface with the database and run ad-hoc queries or generate reports. These tools have been around for a long time and have matured to become valuable for day-to-day interaction with MySQL. If you have ever searched for similar products for MongoDB then you should definitely take a look at JSONStudio by jSonar Inc. It is a web-based front end for interacting with any MongoDB implementation and offers features like query generation, reporting and even data visualization. JSONStudio is not just one tool but actually a suite of many different tools under one unified dashboard, and I must say its list of features is impressive. The best part about this suite of tools is that it interfaces seamlessly with any ObjectRocket MongoDB instance.


To get started, head on over to and download the free evaluation copy of the tool. I installed the version for Mac OS X, but if you have Linux or Windows those packages are listed as well. The installation guide can be found by hovering over Resources in the navigation bar and selecting Guide. This will take you to the documentation for the current version of the software.


Once you have the software installed and the web service up and running you should get to a screen that looks something like this:


JSONStudio connection page

All the details you need to hook this up to an ObjectRocket instance can be found in the ObjectRocket Control Panel. First log in to with your ObjectRocket username and password. Once authenticated you should see a list of instances like so:



I am going to be connecting to my JSONStudio instance and looking specifically at my JSONTest database.  To get those connection details I first will click on my JSONStudio instance and then select the JSONTest database in the Databases section of my Instance Details page:

I then will need the SSL Connect String and a username from the Users section:


With the connection details in hand we can now connect JSONStudio to the instance.  Fill in the relevant information into the login page like so:

Since I am connecting over SSL I need to check the Use SSL box, as this passes the correct flag to the driver under the hood to make a secure connection. I also chose the Secondary Preferred option so that my search queries will favor secondaries instead of the primary. This can help with performance if the primary is under a heavy write load, but be aware, as mentioned in the MongoDB documentation, that reading from secondaries can return stale data in certain circumstances. Another thing to note: I chose to save the information I just entered so that I can quickly connect again another time. When you save the datasource it does not save the password, so you will have to type that every time.


Once you hit Login you should see a screen very similar to this:



That should get you started with using JSONStudio by jSonar Inc. with your ObjectRocket MongoDB Instance.  If you run into any issues connecting to your instance please email and we will be more than happy to help you get connected.  Happy querying!