62

On this page, the official RPi3 announcement states:

You’ll need a recent NOOBS or Raspbian image from our downloads page. At launch, we are using the same 32-bit Raspbian userland that we use on other Raspberry Pi devices; over the next few months we will investigate whether there is value in moving to 64-bit mode.

My question is, given that the processor is 64 bits, isn't it obvious that running the OS in 64 bits will be better in every way? What am I missing?

goldilocks
zundi

8 Answers

67

given that the processor is 64 bits, isn't it obvious that running the OS in 64 bits will be better in every way?

No, actually, it's not. In some ways, running a 64 bit operating system could actually degrade the Raspberry Pi's performance.

Benefits of 64 bit:

The two primary benefits of using a 64 bit processor/operating system are that the device can address more than 4 GB of RAM, and that it can natively handle integers larger than 2^32 without needing a bignum library.

At the time of writing the Raspberry Pi doesn't have more than 4 GB of RAM (note: as of Aug. 2020, the RPi 4 ships with 2 to 8 GB). At 1 GB of RAM, you've completely lost the first of the two primary benefits. As for the second benefit, what percentage of people actually use enough giant numbers to make it worthwhile for the foundation to support a whole second operating system? As is, the RPi can handle huge numbers through software methods, but if you're going to be working in that realm consistently, you probably need better hardware anyway.

Problems with 64 bit:

The ability to store a larger number isn't granted by magic. Rather, the size of memory objects needs to be increased. In C (and C++) this means changing an int to an int64_t. This isn't done automatically, hence the comments about the foundation not wanting to maintain two branches.

Additionally, many applications simply don't provide a benefit (for most users) when run in 64 bit mode. Notice that most web browsers, MS Office, and a whole host of other popular software are all still shipped and maintained in 32 bit form. Sure, you can get your hands on a 64 bit release of MS Office, but it's rarely used.

If the application/operating system is written to take advantage of a 64 bit architecture, your application is going to use more memory, simply because variables and pointers take up more space. Usually this is a relatively small trade-off on machines that benefit from the perks. In our case, we get very few perks and have very little RAM.

Also of note:

Just because you're running on a 64 bit machine doesn't mean the application isn't running as 32 bit. Windows makes this very clear by having two different install paths, C:\Program Files and C:\Program Files (x86).

So, will the foundation likely provide 64 bit support?:

We're back at the same point of "some people may see benefit, but most will not". You'll certainly see other projects offering 64 bit builds, but unless the foundation gets a lot of undeserved (imo) flak, they probably won't and shouldn't (imo). Creating and maintaining a separate 64 bit branch isn't a small endeavor, and honestly, it just doesn't seem worth it.

Greenonline
Jacobm001
23

It's worth noting that the situation is different for ARM and Intel/AMD. That's because the switch to x86_64 was also used as an opportunity to update a badly-aging architecture that was basically crippled by having only 8 general-purpose registers; 64-bit mode doubles that number. So switching an Intel/AMD system to 64-bit mode also enables real features which make a significant difference in performance.

ARM doesn't have this problem to begin with (although AArch64 adds registers, the 32-bit architectures weren't starved for them), so the benefits are basically more directly-addressable memory and native big integer support — way less of a big deal, and perhaps counteracted by the downside (more memory used for everything).

(As an aside, for this reason there has been some work on creating an "x32" ABI for Intel/AMD Linux, keeping the CPU enhancements but using 32-bit pointers.)

mattdm
6

I am sure there are already people running Debian AArch64 (ARMv8) on the Pi 3; it certainly would not be that hard for many people (see here for some clues about how that might work)1, although for most users it is probably a bit of a stretch.

However, if Raspbian and/or the Foundation don't come out with a 64-bit version, you will increasingly see people with blogs, etc., explaining how to run one and still get the goodies you need.


There is now a Fedora aarch64 release for the Pi 3.


1. There will be some complications with the 32-bit /opt/vc stuff; I am not sure how surmountable that is. There used to be 32-bit compat libs for x86-64, but for AArch64... maybe not.

goldilocks
5

As part of the launch publicity, I saw it mentioned that one concern is the effort required to maintain two separate code bases (32 and 64 bit). The Adafruit Pi 3 launch video also mentioned that the move to a 64 bit processor was more about the clock speed increase the new chip provided than about using 64 bit mode.

Steve Robillard
4

Addressing the assertions that 64 bit native programs are larger (more memory for data and pointers), and that there are no noticeable benefits to a 64 vs. 32 bit OS on ARMv8 with less than 4 GB of RAM, I wish to raise a few points.

There are some significant architectural differences between ARMv7 (and before) and ARMv8 that make ARMv8 execution more efficient. Some of this comes from the wider internal data paths, some from the elimination of special cases, and some from a much deeper pipeline. These same changes make the ARMv8 better at running ARMv7 (32 bit) code.

Native 64 bit applications do use 64 bit pointers, and 'size_t' is 64 bits, so data structures using those do get larger. The rest of the data will tend to stay the same size. This has only a minor effect, however, on the size of the executable images.

Where 64 bit native really shines (if you don't care about large integer and floating point stuff) is having a bigger virtual address space:

  • The OS is able to divide the virtual address space into more and larger sections, allowing easier management of shared resources, more streamlined context switches between different levels of privilege, and so on.
  • If you've enabled swapping, you can run more and larger processes, exceeding physical memory limits (this is actually true in 32 bit as well, but you're less limited in 64 bit)

Whether the OS currently takes advantage of this or not, it's going to make a difference as the mainstream moves away from 32 bit.

I think the best argument for moving to a native 64 bit AArch64 kernel is portability: the mainstream desktop has moved to mostly 64 bit processors, I'm seeing more packages that assume 64 bits, and porting such code back to 32 bits is harder than porting from 32 to 64 bits. In user space you can run 32 bit and 64 bit applications side by side, assuming you have installed the multi-arch libraries, so it is not necessary to port from 32 to 64 bit where it doesn't matter. A 64 bit OS simply gives you a larger selection of software.

I'm not saying that producing a 64 bit kernel for the Raspberry Pi 3 is easy - there are significant differences that require changes at the low level, and not all device drivers are 64 bit clean (especially drivers for ARM specific GPUs). It may be that Raspbian will remain a 32 bit OS, but I believe that (in the long run) that is short-sighted.

A single boot medium (an SD card, for example) can contain both 64 and 32 bit versions of the OS, and the secondary boot software (u-boot, arm-boot, and others) can determine which one to load. The tougher part is userland: the file system would have to be multi-arch, even on 32 bit systems where the 64 bit stuff is useless. I would address this with a script or utility, run after the initial boot, that removes the unneeded libraries and executables on 32 bit only systems.

4

64-bit addressing can be useful even if you don't have more than 1 GB of memory.

It allows you to memory-map large files, so you get a pointer and let the OS do the I/O transparently - just another way of doing I/O. You need 64-bit addressing to do this with large files.

Another example where I see it can be useful is to allow processes to have more than 2GB of address space, using swap space. I recently had an issue on a 32-bit NAS with lots of storage, and a damaged filesystem. The fsck process ran out of memory, even with the caching options turned on. Adding swap space could not solve the problem, the 32-bit address space was the hard limit there. So there was just no way to run fsck on this large damaged filesystem with a 32-bit binary. With a 64-bit binary and some swap space, it would have run.

GTC
2

The existing answers cover the problems of a 64-bit arch very well, but I am not seeing many stated advantages of upgrading. So, here are two I have recently discovered:

  • When PHP handles Unix timestamps, the integer size in a 32-bit arch sets an upper limit on dates, such that they cannot go beyond a particular day in 2038. I expect this is an issue for all languages that handle timestamps. (Thankfully, most date handling subsystems that do not use Unix timestamps, such as PHP's DateTime, are designed specifically not to be limited by this problem even on older CPUs).
  • Mongo is limited to databases under 2 GB in size on this arch, and 32-bit builds are soon to be deprecated. From the manual:

    Starting in MongoDB 3.2, 32-bit binaries are deprecated and will be unavailable in future releases.

    Although the 32-bit builds exist for Linux and Windows, they are unsuitable for production deployments. 32-bit builds also do not support the WiredTiger storage engine.

halfer
-4

My thoughts on this: although I don't know exactly how an ARM processor addresses memory, I can tell you from the multiple CPU architectures I have programmed on (SPARC/Alpha/i386/AMD64/x86_64) that when using shared memory and addressing it by its "real" virtual address pointer, the move to 64 bit is not trivial. Although memcpy does what it's supposed to do, you need to take into consideration that in 64 bits the data is stored like this (byte-reversed within each word):

HGFEDCBA
HGFEDCBA
HGFEDCBA

yet in 32 bits it looks like this:

ABCD
ABCD
ABCD

So, in 32 bits, when you store say a JPEG in RAM, you can read its header bytes or do edge detection without any problem in a linear fashion, say by going byte by byte forwards. But in a 64 bit architecture this changes:

32bit:

for (i = 0; i < img_length/4; i++)
{
    address = shm_start + i;        /* shm_start: pointer to 32-bit words */
    for (c = 0; c < 4; c++)
    {
        byte = (*address >> (8*c)) & 0xFF;
    }
}

64bit:

for (i = 0; i < img_length/8; i++)
{
    address = shm_start + i;        /* shm_start: pointer to 64-bit words */
    for (c = 7; c >= 0; c--)
    {
        byte = (*address >> (8*c)) & 0xFF;
    }
}
Jacobm001
bobx