Some notes on maxTimeMS behaviour as of MongoDB 2.6:
maxTimeMS sets a cumulative time limit in milliseconds for server-side processing of operations on a cursor.
Network latency and idle time for a cursor do not count toward maxTimeMS.
Once the maxTimeMS limit is reached, the operation will be killed at the next safe interrupt point (for example, when the operation yields). This means an operation can run past maxTimeMS if it is not currently interruptible.
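For illustration, the limit can be set either via the cursor method or as a command option (the collection name and filter below are hypothetical):

```javascript
// Limit a query to 100ms of cumulative server-side processing time
db.mycoll.find({status: "A"}).maxTimeMS(100)

// The same limit can be passed as an option to database commands
db.runCommand({count: "mycoll", query: {status: "A"}, maxTimeMS: 100})
```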
From your profile output the query held a read lock for 334 microseconds and yielded once, so it appears to be well below the maxTimeMS threshold. This isn't an inherently slow query to execute; it is more likely being affected by other activity on your system.
A properly indexed query returning a single result is unlikely to be slow; the maxTimeMS option is much more useful for a poorly indexed query or a long-running query that returns many results.
For a forced example of a slow-executing query to test maxTimeMS, you could use a server-side JavaScript function, e.g.:
```javascript
// Add some test documents in the collection
db.mycoll.insert([{sleepy:1}, {tired:1}, {zzz:1}, {xyzzy:1}])

// A query that sleeps for 1s on each document found
db.mycoll.find({$where: "sleep(1000) || true"}).maxTimeMS(50)
```
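When the limit is exceeded, the server aborts the operation with an "operation exceeded time limit" error. A rough sketch of confirming this in the shell (reusing the hypothetical collection above):

```javascript
// Iterating the cursor should fail with an "operation exceeded time limit" error
try {
    db.mycoll.find({$where: "sleep(1000) || true"}).maxTimeMS(50).itcount()
} catch (e) {
    print("Caught: " + e)
}
```

If you want to force a timeout without contriving a slow query, the server's own test suite also uses a maxTimeAlwaysTimeOut failpoint for this (which requires starting mongod with test commands enabled).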
I would suggest looking at query performance in aggregate, as there should be other performance metrics that correlate. Ideally, track your MongoDB metrics over time using a monitoring system. For monitoring suggestions, see Monitoring for MongoDB in the manual.
Asya's presentation on Diagnostics & Debugging also provides a good overview of performance troubleshooting tools and how they were used to solve some specific performance scenarios.