11

IMHO, binding a variable to another variable or to an expression is a very common scenario in mathematics. In fact, many students initially think the assignment operator (=) is some kind of binding. But in most languages, binding is not supported as a native feature. In some languages like C#, binding is supported in certain cases, provided some conditions are fulfilled.

But IMHO, implementing this as a native feature would be as simple as changing the following code-

int a,b,sum;
sum := a + b;
a = 10;
b = 20;
a++;

to this-

int a,b,sum;
a = 10;
sum = a + b;
b = 20;
sum = a + b;
a++;
sum = a + b;

That is, the compiler would place the binding instruction as an assignment after every instruction that changes the value of any variable contained in the expression on the right side. After that, trimming redundant instructions (or optimizing the assembly after compilation) would do.

So, why is it not supported natively in most languages, especially in the C family of languages?

Update:

From the different opinions, I think I should define this proposed "binding" more precisely-

  • This is one-way binding. Only sum is bound to a+b, not vice versa.
  • The scope of the binding is local.
  • Once the binding is established, it cannot be changed. Meaning, once sum is bound to a+b, sum will always be a+b.

Hope the idea is clearer now.

Update 2:

I just wanted this P# feature. Hope it will be there in future.

Gulshan
  • 9,532

6 Answers

9

You're confusing programming with math. Not even functional programming is entirely math, even though it borrows many ideas and turns them into something that can be executed and used for programming. Imperative programming (which includes most C-inspired languages, notable exceptions being JavaScript and the more recent additions to C#) has nearly nothing to do with math, so why should these variables behave like variables in math?

You have to consider that this is not always what you want. Many people are bitten by closures created in loops precisely because closures keep the variable, not a copy of its value at some point, i.e. for (i = 0; i < 10; i++) { var f = function() { return i; }; /* store f */ } creates ten closures which all return 10. So you'd need to support both ways - which means twice the cost on the "complexity budget" and yet another operator. Possibly also incompatibilities between code using this and code not using this, unless the type system is sophisticated enough (more complexity!).

Also, implementing this efficiently is very hard. The naive implementation adds a constant overhead to every assignment, which can add up quickly in imperative programs. Other implementations might delay updates until the variable is read, but that's significantly more complex and still has overhead even when the variable is never read again. A sufficiently smart compiler can optimize both, but sufficiently smart compilers are rare and take much effort to create (note that it's not always as simple as in your example, especially when the variables have broad scope and multithreading comes into play!).

Note that reactive programming is basically about this (as far as I can tell), so it does exist. It's just not that common in traditional programming languages. And I bet some of the implementation problems I listed in the previous paragraph have been solved.

3

I think what you're describing is called a Spreadsheet:

A1=5
B1=A1+1
A1=6

...then evaluating B1 returns 7.

EDIT

The C language is sometimes called "portable assembly". It's an imperative language, whereas spreadsheets, etc., are declarative languages. Saying B1=A1+1 and expecting B1 to re-evaluate when you change A1 is definitely declarative. Declarative languages (of which functional languages are a subset) are generally considered higher level languages, because they're farther away from how the hardware works.

On a related note, automation languages like ladder logic are typically declarative. If you write a rung of logic that says output A = input B OR input C it's going to re-evaluate that statement constantly, and A can change whenever B or C changes. Other automation languages like Function Block Diagram (which you might be familiar with if you've used Simulink) are also declarative, and execute continually.

Some (embedded) automation equipment is programmed in C, and if it's a real-time system, it probably has an infinite loop that re-executes the logic over and over, similar to how ladder logic executes. In that case, inside your main loop you could write:

A = B || C;

...and since it's executing all the time, it becomes declarative. A will constantly be re-evaluated.

3

C, C++, Objective-C:

Blocks provide the binding feature that you're looking for.

In your example:

sum := a + b;

you're setting sum to the expression a + b in a context where a and b are existing variables. You can do exactly that with a "block" (a.k.a. closure, a.k.a. lambda expression) in C, C++, or Objective-C with Apple's extensions (pdf):

__block int a = 0, b = 0;           // declare a and b
int (^sum)(void);                   // declare sum
sum = ^(void){return a + b;};       // sum := a + b

This sets sum to a block that returns the sum of a and b. The __block storage class specifier indicates that a and b may change. Given the above, we can run the following code:

printf("a=%d\t b=%d\t sum=%d\n", a, b, sum());
a = 10;
printf("a=%d\t b=%d\t sum=%d\n", a, b, sum());
b = 32;
printf("a=%d\t b=%d\t sum=%d\n", a, b, sum());
a++;
printf("a=%d\t b=%d\t sum=%d\n", a, b, sum());

and get the output:

a=0      b=0     sum=0
a=10     b=0     sum=10
a=10     b=32    sum=42
a=11     b=32    sum=43

The only difference between using a block and the "binding" that you propose is the empty pair of parentheses in sum(). The difference between sum and sum() is the difference between an expression and the result of that expression. Note that, as with functions, the parentheses don't have to be empty -- blocks can take parameters just as functions do.

Caleb
  • 39,298
3

It fits very poorly with most models of programming. It would represent a kind of completely uncontrolled action-at-a-distance, in which one could destroy the value of hundreds or thousands of variables and object fields by making a single assignment.

jprete
  • 1,519
3

Ya' know, I have this nagging gut feeling that reactive programming might be cool in a Web2.0 environment. Why the feeling? Well, I have this one page that's mostly a table that changes all the time in response to table-cell onClick events. And cell clicks often mean changing the class of all cells in a col or row; and that means endless loops of getRefToDiv( ), and the like, to find other related cells.

IOW, many of the ~3000 lines of JavaScript I've written do nothing but locate objects. Maybe reactive programming could do all that at small cost; and at a huge reduction in lines of code.

What do you guys think about that? Yes, I do notice that my table has a lot of spreadsheet-like features.

Pete Wilson
  • 1,756
2

C++

Updated to be generic: parameterized on the return and input types, and able to take any binary operation satisfying those types. The code computes the result on demand and tries not to recompute results if it can get away with it. Take this out if it is undesirable (because of side effects, because the contained objects are large, or whatever).

#include <iostream>

template <class R, class A, class B>
class Binding {
public:
    typedef R (*BinOp)(A, B);

    Binding (A &x, B &y, BinOp op)
        : op(op), rx(x), ry(y), useCache(false) {}

    R value () const {
        if (useCache && x == rx && y == ry) {
            return cache;       // inputs unchanged: reuse the cached result
        }
        x = rx;
        y = ry;
        cache = op(x, y);
        useCache = true;
        return cache;
    }

    operator R () const { return value(); }

private:
    BinOp op;
    A &rx;                      // references to the bound variables
    B &ry;
    mutable A x;                // copies of the inputs at last evaluation
    mutable B y;
    mutable R cache;
    mutable bool useCache;
};

int add (int x, int y) { return x + y; }

int main () {
    int x = 1;
    int y = 2;
    Binding<int, int, int> z(x, y, add);
    x += 55;                    // x is now 56
    y *= x;                     // y is now 112
    std::cout << (int)z;        // prints 168
    return 0;
}

Thomas Eding
  • 1,072