Why rewrite systems matter
In the chromopill article, I talked about the idea of a rewrite system.
The basic observation is that things with a logical structure all have built-in rewrite systems. They vary in how explicit they are. But invariably, they can be found.
Side note: the best way to assess the “logical rigor” of X is to ask how explicit X’s rewrite system is. If it’s fuzzy, not a tree structure, or nonexistent, X is probably not “rigorous”.
In rewrite systems, high-level concepts or operations are always rewritten in terms of lower-level operations. “Well, duh,” you say. You’d be surprised how often this principle is violated. Most of the math you learned in school doesn’t pass this test. If you don’t believe me, tell me what exactly an “angle” is.
QAnal (short for “rational analysis”) distinguishes itself from CIA mathematics in many ways. By far the most important is that QAnal has an explicit rewrite system. At the bottom is integer arithmetic.
Let me give a very concrete example.
Here is QAnal’s code for multiplying rational numbers:
-spec times(q(), q()) -> q().
% @doc product of two rationals
times({q, TL, BL}, {q, TR, BR}) ->
    q(TL * TR,
      BL * BR).
The patterns TL, BL, TR, and BR are each integers. The function q constructs a rational (one of those {q, TopOfTheFraction, BottomOfTheFraction} tuples). What this reflects is the standard rule for multiplying fractions that you learned in school. Crucially, the function q also reduces the fraction, guaranteeing that equal fractions have equal representation. This constraint is extremely important in computing, but is at best an afterthought to the CIA.
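To make the “equal fractions have equal representation” point concrete, here is roughly what this looks like in a shell. This is a sketch I wrote, and I’m assuming the module is named qx_q, which is my guess rather than something stated here:

1> qx_q:q(2, 4).                    % reduced at construction time
{q,1,2}
2> qx_q:q(1, 2) =:= qx_q:q(2, 4).   % equal fractions compare equal as plain terms
true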
Let’s see how q works:
-spec q(z(), z()) -> q().
% @doc form the quotient of two integers
q(_, 0) ->
    error(badarg);
q(N, 1) ->
    {q, N, 1};
q(Top, Bot) when Bot < 0 ->
    q(-1 * Top, -1 * Bot);
q(Top, Bot) when 0 < Bot,
                 is_integer(Top),
                 is_integer(Bot) ->
    GCD = qx_z:gcd(Top, Bot),
    {q, Top div GCD,
        Bot div GCD}.
What this means:
you can’t divide by zero
if the bottom of the requested fraction is 1, just construct the fraction as given
if the bottom of the requested fraction is negative, multiply the top and bottom of the requested fraction by -1 and try again
otherwise, divide the top and bottom of the requested fraction by their GCD, and construct that
The second case is redundant, now that I think about it. Eh, doesn’t really matter.
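If you want to see each of those clauses fire, here’s a rough shell session (same caveat: the qx_q module name is my invention, and the error printout is abbreviated):

1> qx_q:q(3, 0).      % clause 1: no dividing by zero
** exception error: bad argument
2> qx_q:q(5, 1).      % clause 2: bottom is 1, construct as given
{q,5,1}
3> qx_q:q(2, -4).     % clause 3: flip both signs, then try again
{q,-1,2}
4> qx_q:q(6, 8).      % clause 4: divide top and bottom by their GCD
{q,3,4}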
The most interesting code involved here is Euclid’s GCD algorithm:
-type z() :: integer().
-type znn() :: non_neg_integer().
-type zp() :: pos_integer().
-spec gcd(z(), z()) -> znn().
% @doc
% Greatest common divisor of two integers
% @end
% make sure both arguments are non-negative
gcd(X, Y) ->
    gcdabs(abs(X), abs(Y)).
-spec gcdabs(znn(), znn()) -> znn().
% @private gcd where the arguments can be assumed to be non-negative,
% but not necessarily in the correct order
%
% This function is responsible for making sure the argument on the
% left is bigger than or equal to the argument on the right.
gcdabs(X, Y) when X < Y ->
    gcdplus(Y, X);
gcdabs(X, Y) when X >= Y ->
    gcdplus(X, Y).
-spec gcdplus(znn(), znn()) -> znn().
% @private gcd where the arguments can be assumed to be non-negative,
% and in the correct order
gcdplus(X, 0) ->
    X;
% X might be equal to Y here, in which case, modulo(X, Y) = 0, and
% the next call will degenerate into the base case
gcdplus(X, Y) when X >= Y ->
    % X = qY + R
    % R = X - qY
    % 0 =< R < Y
    % therefore any common divisor of X and Y (such as the gcd) also
    % divides R
    % therefore
    %    gcd(X, Y) ->
    %        gcd(Y, R)
    gcdplus(Y, modulo(X, Y)).
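% A worked example (mine, not from the QAnal source) of how these three
% functions hand off to each other, for gcd(-18, 48):
%
%   gcd(-18, 48)
%   -> gcdabs(18, 48)     abs() both arguments
%   -> gcdplus(48, 18)    put the bigger argument on the left
%   -> gcdplus(18, 12)    modulo(48, 18) = 12
%   -> gcdplus(12, 6)     modulo(18, 12) = 6
%   -> gcdplus(6, 0)      modulo(12, 6)  = 0
%   -> 6                  base case: the gcd is 6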
-spec modulo(A :: z(), B :: zp()) -> AModB :: znn().
% @doc Like the rem operator, except this always returns a result
% within 0..B
%
% for simplicity/obviously-correct-behavior, B must be strictly
% positive
modulo(A, B) when is_integer(A),
                  is_integer(B),
                  0 < B ->
    ARemB = A rem B,
    % one of two cases
    %
    % -B < ARemB < 0 ->
    %     % this will happen when A < 0
    %     return ARemB + B
    % 0 =< ARemB < B ->
    %     % this will happen when 0 =< A
    %     return ARemB
    NegRem = ((-1 * B) < ARemB) andalso (ARemB < 0),
    PosRem = (0 =< ARemB) andalso (ARemB < B),
    if NegRem ->
           ARemB + B;
       PosRem ->
           ARemB
    end.
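The only place modulo differs from plain rem is when A is negative. A quick illustration (a sketch; I’m assuming modulo/2 lives in qx_z next to gcd/2 and is exported, which isn’t actually said above):

1> -7 rem 3.             % rem keeps the sign of the left operand
-1
2> qx_z:modulo(-7, 3).   % modulo shifts negative remainders up by B: -1 + 3
2
3> qx_z:modulo(7, 3).    % for non-negative A, it agrees with rem
1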
The point of this exercise is to illustrate that even in something as simple as “multiply two fractions”, there are quite a lot of gears that end up getting turned. If you’re interested in why that code makes sense, I recommend my "Math for Programmers" playlist.
I propose the following criteria for well-designed rewrite systems:
you don’t need to understand the gears to get tits on the screen;
the gears are discoverable if you are curious;
the structure of the gears makes sense;
the logic of the gear structure is discoverable.
There’s more I can say about this, but I’ll stop here for now.
I have an article in the oven about the difference between “elegance” and “simplexity”. Elegance is a crucial aspect of a rewrite system. For now I’ll leave you with this:
Help me understand why this is interesting