How do I get the current Unix time in milliseconds (i.e. the number of milliseconds since the Unix epoch, January 1, 1970)?
This:
date +%s
will return the number of seconds since the epoch.
This:
date +%s%N
returns the seconds followed by the nanoseconds of the current second.
So:
date +%s%N | cut -b1-13
will give you the number of milliseconds since the epoch: the current seconds followed by the first three digits of the nanoseconds.
and, from MikeyB: echo $(($(date +%s%N)/1000000)) (dividing by 1000 only gets you to microseconds)
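To illustrate the difference, a quick sketch (the timestamp in the comments is made up):
ns=$(date +%s%N)          # e.g. 1700000000123456789: seconds followed by nanoseconds
echo $(( ns / 1000000 ))  # 1700000000123    -> milliseconds since the epoch
echo $(( ns / 1000 ))     # 1700000000123456 -> microseconds, one step too far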
You may simply use %3N to truncate the nanoseconds to the 3 most significant digits (which then are milliseconds):
$ date +%s%3N
1397392146866
This works e.g. on my Kubuntu 12.04 (Precise Pangolin).
But be aware that %N may not be implemented, depending on your target system. For example, on an embedded system (buildroot rootfs, compiled using a non-HF ARM cross toolchain) there was no %N:
$ date +%s%3N
1397392146%3N
(My non-rooted Android tablet doesn't have %N either.)
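If a script has to run on both kinds of systems, a minimal sketch that probes for %N support and falls back to whole seconds:
case "$(date +%N)" in
''|*[!0-9]*) ms=$(( $(date +%s) * 1000 )) ;;      # %N unsupported: second resolution only
*)           ms=$(( $(date +%s%N) / 1000000 )) ;; # %N supported: real milliseconds
esac
echo "$ms"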
date +%N doesn't work on OS X, but you could use one of:
- Ruby: ruby -e 'puts Time.now.to_f'
- Python: python -c 'import time; print(int(time.time() * 1000))'
- Node.js: node -e 'console.log(Date.now())'
- PHP: php -r 'echo microtime(TRUE);'
- Elixir: DateTime.utc_now() |> DateTime.to_unix(:millisecond)
- Perl: perl -e 'use Time::HiRes qw(time); printf "%.3f\n", time;'
- The Internet: wget -qO- http://www.timeapi.org/utc/now?\\s.\\N
or, for milliseconds rounded to the nearest second:
date +%s000
My solution is not the best, but it worked for me:
date +%s000
I just needed to convert a date like 2012-05-05 to milliseconds.
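For example (assuming GNU date, which accepts -d for an arbitrary date):
date -d '2012-05-05' +%s000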
Just throwing this out there, but I think the correct formula with the division would be:
echo $(($(date +%s%N)/1000000))
If you are looking for a way to display the length of time your script ran, the following will provide a (not completely accurate) result:
As near the beginning of your script as you can, enter the following
basetime=$(date +%s%N)
This'll give you a starting value of something like 1361802943996000000.
At the end of your script, use the following
echo "runtime: $(echo "scale=3;($(date +%s%N) - ${basetime})/(1*10^09)" | bc) seconds"
which will display something like
runtime: 12.383 seconds
Notes:
(1*10^09) can be replaced with 1000000000 if you wish
"scale=3" is a rather rare setting that coerces bc to do what you want. There are lots more!
I only tested this on Windows 7/MinGW... I don't have a proper *nix box at hand.
For the people who suggest running external programs to get the milliseconds... at that rate, you might as well do this:
wget -qO- http://www.timeapi.org/utc/now?\\s.\\N
Point being: before picking any answer from here, please keep in mind that not all of these programs finish in under one whole second. Measure!
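For example, the shell's time keyword gives a rough idea of what each approach costs on your machine (run it with whichever interpreters you actually have installed):
time date +%s%3N >/dev/null
time python3 -c 'import time; print(int(time.time() * 1000))' >/dev/null
time node -e 'console.log(Date.now())' >/dev/null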
Another solution for MacOS: GNU Coreutils
I noticed that the MacOS version of the date command does not interpret the %N format sequence as nanoseconds; it simply prints N. I ran into this when I started using, on a MacOS machine, my .bashrc script from Linux that relies on %N to measure how long commands take to run.
After a little research, I learned that only GNU date from the GNU Coreutils package supports that format sequence. Fortunately, it's pretty easy to install on MacOS using Homebrew:
brew install coreutils
Since that package contains executables that are already present on MacOS, Coreutils' executables will be installed with a g prefix, so date will be available as gdate.
See for example this page for further details.
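Once Coreutils is installed, gdate understands the GNU format sequences, so for example:
gdate +%s%3N
prints the milliseconds since the epoch, just like GNU date on Linux.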
An additional solution, since this question was originally asked in 2014: Bash 5.0 (released in 2019) introduced two new built-in variables:
EPOCHREALTIME - the number of seconds since the Unix Epoch as a floating-point value with microsecond granularity
EPOCHSECONDS - the number of seconds since the Unix Epoch
Truncating the last three digits of the microseconds with ${EPOCHREALTIME::-3} gives you the epoch time in seconds with millisecond precision, and avoids the (expensive) call to date or other external programs.
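If you want an integer count of milliseconds rather than seconds.milliseconds, a small pure-Bash sketch that strips the decimal point first:
# EPOCHREALTIME looks like 1700000000.123456; removing the dot yields
# microseconds since the epoch, and integer division by 1000 yields milliseconds
ms=$(( ${EPOCHREALTIME/./} / 1000 ))
echo "$ms"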
https://github.com/ysoftwareab/nanoseconds
I've just created this cross-platform project, written in Go, to output nanoseconds since the Unix epoch. As simple as that. Download the executable for your platform of choice from a GitHub release.
Getting granular timestamps is important for benchmarks (e.g. OpenTelemetry). Depending on GNU coreutils or on a particular programming language being installed is in many cases a no-go.
Here is how to get time in milliseconds without performing division. Maybe it's faster...
# test=`date +%s%N`
# testnum=${#test}
# echo ${test:0:$testnum-6}
1297327781715
Update: Another alternative, in pure Bash and only for Bash 4.2+, is the same as above but uses printf to get the date. It will definitely be faster, because no child processes are forked off the main one.
printf -v test '%(%s%N)T' -1
testnum=${#test}
echo ${test:0:$testnum-6}
Another catch here, though, is that your strftime implementation needs to support %s and %N, which is not the case on my test machine. See man strftime for the supported options. Also see man bash for the printf syntax; -1 and -2 are special values for the time.
(repeat from previous answers) date +%N doesn't work on OS X, but you could also use:
Perl (requires Time::Format module). Perhaps it is not the best CPAN module to use, but it gets the job done. Time::Format is generally made available with distributions.
perl -w -e'use Time::Format; printf STDOUT ("%s.%s\n", time, $time{"mmm"})'
The most accurate timestamp we can get for Mac OS X is probably this:
python3 -c 'import datetime; print(datetime.datetime.now().strftime("%s.%f"))'
1490665305.021699
But we need to keep in mind that it takes around 30 milliseconds to run. We can cut the result down to a two-digit fraction, compute the average overhead of reading the time at the very beginning, and then subtract it from each measurement. Here is an example:
function getTimestamp {
    echo `python -c 'import datetime; print(datetime.datetime.now().strftime("%s.%f"))' | cut -b1-13`
}
function getDiff {
    echo "$2-$1-$MeasuringCost" | bc
}
prev_a=`getTimestamp`
acc=0
ITERATIONS=30
for i in `seq 1 $ITERATIONS`;do
    #echo -n $i
    a=`getTimestamp`
    #echo -n " $a"
    b=`echo "$a-$prev_a" | bc`
    prev_a=$a
    #echo " diff=$b"
    acc=`echo "$acc+$b" | bc`
done
MeasuringCost=`echo "scale=2; $acc/$ITERATIONS" | bc`
echo "average: $MeasuringCost sec"
t1=`getTimestamp`
sleep 2
t2=`getTimestamp`
echo "measured seconds: `getDiff $t1 $t2`"
You can uncomment the echo commands to see better how it works.
Running this script usually gives one of these 3 results:
measured seconds: 1.99
measured seconds: 2.00
measured seconds: 2.01
For Alpine Linux (many Docker images) and possibly other minimal Linux environments, you can abuse adjtimex:
adjtimex | awk '/(time.tv_usec):/ { printf("%06d\n", $2) }' | head -c3
adjtimex is used to read (and set) kernel time variables. With awk you can get the microseconds, and with head you can use the first 3 digits only.
I have no idea how reliable this command is.
Note: Shamelessly stolen from this answer
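This only yields the sub-second digits; a rough sketch for a full millisecond timestamp would be the following (the two reads are not atomic, so it can be off near a second boundary):
secs=$(date +%s)
msec=$(adjtimex | awk '/(time.tv_usec):/ { printf("%06d\n", $2) }' | head -c3)
echo "${secs}${msec}"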
Perl solution
perl -mTime::HiRes -e 'printf "%.0f\n", (Time::HiRes::time() * 1000 )'
Time::HiRes::time() returns a float of the form unixtimeinsec.microseconds
Multiply by 1000 to shift left 3 digits, and output with no decimal digits.
Why not just convert to an integer with %d? Because it would overflow a signed (or unsigned) integer on a 32-bit OS, such as our ancient AIX servers.
As others have pointed out, it's a question of portability. The accepted answer works on Linux or anything that can run GNU date, but not on several other UNIX flavors. Personally, I find our older systems are much more likely to have Perl than Python, Node.js, Ruby or PHP.
Yes, date +%s%3N (if available) is about 5x faster than Perl.
Using date and expr can get you there i.e.
X=$(expr `date +%H` \* 3600 + `date +%M` \* 60 + `date +%S`)
echo $X
Just expand on it to do whatever you want
I realise this does not give milliseconds since the epoch, but it might still be useful for some cases; it all depends on what you need it for. Multiply by 1000 if you need a millisecond number :D
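A slightly safer sketch reads hour, minute and second from a single date call (so they come from the same instant) and does the multiplication by 1000; the 10# prefix stops 08 and 09 from being parsed as octal:
read H M S < <(date '+%H %M %S')
ms=$(( (10#$H * 3600 + 10#$M * 60 + 10#$S) * 1000 ))
echo "$ms"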
The simplest way would be to make a small executable (in C, for example) and make it available to the script.
Putting the previous responses all together, when in OS X,
ProductName: Mac OS X
ProductVersion: 10.11.6
BuildVersion: 15G31+
you can do something like this:
microtime() {
    python -c 'import time; print(time.time())'
}
compute() {
    local START="$(microtime)"
    # run the command being timed: $1 is the command, $2 its args
    $1 $2
    local END="$(microtime)"
    DIFF="$(bc <<< "$END - $START")"
    echo -e "$1\t$2\t$DIFF"
}
Not adding anything revolutionary here over the accepted answer, just making it easily reusable for those of you who are newer to Bash. Note that this example works on OS X and with older Bash, which was a requirement for me personally.
nowInMs() {
echo "$(($(date +'%s * 1000 + %-N / 1000000')))"
}
Now you can run
TIMESTAMP="$(nowInMs)";
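and, for instance, use it for a quick elapsed-time check:
start="$(nowInMs)"
sleep 1
end="$(nowInMs)"
echo "elapsed: $(( end - start )) ms"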
cat /proc/uptime
will return the value as xx.yy [seconds since boot], so just multiply it by 1000 to get it in milliseconds
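Since the value is fractional, plain shell arithmetic won't handle it directly; a small sketch with awk (note this is uptime, not epoch time):
awk '{ printf("%d\n", $1 * 1000) }' /proc/uptime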
If you want a simple shell elapsed computation, this is easy and portable, using Frank Thonig's answer:
now() {
    python -c 'import datetime; print(datetime.datetime.now().strftime("%s.%f"))'
}
seismo:~$ x=`now`
seismo:~$ y=`now`
seismo:~$ bc <<< "$y - $x"
5.212514