On Sun, 16 Jul 2017 12:00:27 +0000, Rich wrote:
Post by Rich
It is possible, and even tempting, to view a program as an abstract
mechanism, as a device of some sort. To do so, however, is highly
dangerous: the analogy is too shallow because a program is, as a
mechanism, totally different from all the familiar analogue devices we
grew up with. Like all digitally encoded information, it has
unavoidably the uncomfortable property that the smallest possible
perturbations -- i.e. changes of a single bit -- can have the most
drastic consequences. [For the sake of completeness I add that the
picture is not essentially changed by the introduction of redundancy
or error correction.] In the discrete world of computing, there is no
meaningful metric in which "small" changes and "small" effects go hand
in hand, and there never will be.
The moment a computational method is applied to any analogue problem,
this hazard of "the consequences of small changes on outcomes" is ever
present, and it pre-dates even Babbage's Difference Engine. All that's
required is a willing computor (a human capable of doing long division
using nothing more than a writing implement and a recording surface:
paper, wax or clay tablet, or what have you).
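To see just how unforgiving a single-bit perturbation can be, here's a
quick illustrative sketch (mine, not Dijkstra's): flip one bit of an
IEEE-754 double and the size of the damage depends entirely on *which*
bit you hit, with no "small change, small effect" metric in sight.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Return x with one bit of its IEEE-754 double representation flipped."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    (y,) = struct.unpack("<d", struct.pack("<Q", bits ^ (1 << bit)))
    return y

x = 1000.0
# Flipping the lowest mantissa bit barely changes the value at all...
print(flip_bit(x, 0))   # differs from 1000.0 only in the last ulp
# ...while flipping the top exponent bit changes it catastrophically:
print(flip_bit(x, 62))  # a number hundreds of orders of magnitude smaller
```

Same one-bit "perturbation" in both cases; wildly different consequences.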
Post by Rich
Followed by Walter Maner in "Is Computer Ethics Unique?"
EXAMPLE 7: Uniquely Discrete
In a stimulating paper "On the Cruelty of Really Teaching Computer
Science," Edsger Dijkstra examines the implications of one central,
controlling assumption: that computers are radically novel in the
history of the world. Given this assumption, it follows that
programming these unique machines will be radically different from
other practical intellectual activities. This, Dijkstra believes, is
because the assumption of continuity we make about the behavior of
most materials and artifacts does not hold for computer systems. For
most things, small changes lead to small effects, larger changes to
proportionately larger effects. If I nudge the accelerator pedal a
little closer to the floor, the vehicle moves a little faster. If I
press the pedal hard to the floor, it moves a lot faster. As machines
go, computers are very different.
A program is, as a mechanism, totally different from all the familiar
analogue devices we grew up with. Like all digitally encoded
information, it has, unavoidably, the uncomfortable property that the
smallest possible perturbations -- i.e., changes of a single bit --
can have the most drastic consequences.
This essential and unique property of digital computers leads to a
specific set of problems that gives rise to a unique ethical
difficulty, at least for those who espouse a consequentialist view of
ethics.
For an example of the kind of problem where small "perturbations" have
drastic consequences, consider the Mariner 18 mission, where the
absence of the single word NOT from one line of a large program caused
an abort. In a similar case, it was a missing hyphen in the guidance
program for an Atlas-Agena rocket that made it necessary for
controllers to destroy a Venus probe worth $18.5 million. It was a
single character omitted from a reconfiguration command that caused
the Soviet Phobos 1 Mars probe to tumble helplessly in space. I am
not suggesting that rockets rarely failed before they were
computerized. I assume the opposite is true, that in the past they
were far more susceptible to certain classes of failure than they are
today. This does not mean that the German V-2 rocket, for example,
can provide a satisfactory non-computer (or pre-computer) moral
analogy. The behavior of the V-2, being an analog device, was a
continuous function of all its parameters. It failed the way analog
devices typically fail -- localized failures for localized problems.
Once rockets were controlled by computer software, however, they
became vulnerable to additional failure modes that could be extremely
generalized even for extremely localized problems.
Those are all examples of 'human error'. Replace the digital computer
with a human pilot who hasn't been comprehensively trained to handle
*every possible* contingency in the class of contingencies that can be
handled merely by taking the appropriate controlling actions, and you can
produce the same failures. In short, the problem is bad or inadequate
programming, and that applies equally to biological neural processing
systems.
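For the flavour of how one missing word does it, here's a hypothetical
sketch (not the actual Mariner guidance code, which I've never seen) of
how dropping a single `not` silently inverts a decision:

```python
# The intended rule: "abort unless the trajectory is nominal".
def should_abort_correct(trajectory_nominal: bool) -> bool:
    return not trajectory_nominal

# Dropping the single word `not` reverses every decision the
# program will ever make, with no warning from the machine:
def should_abort_buggy(trajectory_nominal: bool) -> bool:
    return trajectory_nominal

print(should_abort_correct(True), should_abort_buggy(True))  # False True
```

One token out of thousands, and the "smallest possible perturbation"
flips the outcome of every flight.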
Post by Rich
"In the discrete world of computing," Dijkstra concludes, "there is no
meaningful metric in which 'small' changes and 'small' effects go hand
in hand, and there never will be." This discontinuous and
disproportionate connection between cause and effect is unique to
digital computers and creates a special difficulty for
Excuse me! "Unique to digital computers"? Really? That's a rather
pompous (and extremely narrow) statement to make about a more generalised
problem of data processing which the natural world has been dealing with
over an ongoing R&D period in excess of half a billion years, and still
is.
Oh, the hubris of this "Johnny-come-lately" who believes the problem
revolves entirely around his own specialist subject of artificially
created digital computing machines! He may be trying to teach programmers
how to be 'better programmers', but ignoring the true origins of the
problem of creating 'flawless software solutions', origins which pre-date
electronic digital computers by more than half a billion years, only does
his own cause a severe disservice.
Post by Rich
consequentialist theories. The decision procedure commonly followed
by utilitarians (a type of consequentialist) requires them to predict
alternative consequences for the alternative actions available to them
in a particular situation. An act is good if it produces good
consequences, or at least a net excess of good consequences over bad.
The fundamental difficulty utilitarians face, if Dijkstra is right, is
that the normally predictable linkage between acts and their effects
is severely skewed by the infusion of computing technology. In short,
we simply cannot tell what effects our actions will have on computers
by analogy to the effects our actions have on other machines.
I wonder how that is different from the situation where the interaction
is between one human being and another?
One does not need computer technology to produce wildly unpredictable
outcomes. Purely analogue systems are more than capable of producing such
results. Witness the classic demonstration in which a steel ball hangs
off the end of a stiff wire spar, suspended by a uni-pivot joint over two
magnets stuck onto a base plate a few millimetres below the reach of the
ball, the magnets standing in for two of the three bodies in the classic
"Three Body Problem" of orbital mechanics. There's plenty of chaotic
behaviour in a system as simple as this (and it gets even more
'interesting' with each additional magnet).
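The magnetic-pendulum rig is fiddly to reproduce in a few lines, but the
same sensitive dependence on initial conditions shows up in the textbook
logistic map, so here's a quick sketch of the general idea:

```python
def logistic_orbit(x0: float, steps: int, r: float = 4.0) -> float:
    """Iterate x -> r*x*(1-x), a textbook chaotic map at r=4."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic_orbit(0.123456789, 50)
b = logistic_orbit(0.123456790, 50)  # perturbed in the 9th decimal place
# The initial gap of one part in a billion is amplified roughly twofold
# per iteration, so after 50 steps the two orbits bear no resemblance:
print(abs(a - b))
```

A purely "analogue" (continuous) rule, yet small changes and small
effects part company almost immediately.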
Then there's the question of black holes, which were described by, I
believe, Albert Einstein (or maybe one of his contemporaries) as places
in the Universe where God had introduced a "Divide by Zero Error", on
account of their spoiling his 'pretty' theory of General Relativity as a
means to fully describe the Universe and all of its behaviours.
The only computers harmed by these problems were the human computors, the
mathematicians. As someone else has put it (I think I'm paraphrasing),
"A computer is just a faster way to make mistakes". More accurately, a
computer is a means of speeding up the detection of errors in the
mathematical formulae typically used to model the behaviour of real world
systems (natural or artificial).
As electronic digital computers get faster and faster, the rate of
production of 'garbage' at their output ports increases as we feed
'garbage' data into their input ports, satisfying the famous GIGO
condition. In this case, the GI portion isn't limited to the input data;
it also includes the program instructions (which are, after all, merely
another set of 'data' that happens to have a special status).
If those space probe delivery systems had used the "Triple Redundancy"
technique now in common use aboard modern passenger-carrying airliners as
part of their fly-by-wire computerised control systems (where the final
control outputs are decided by a three-way vote between three
independently manufactured controllers, each independently programmed to
perform nominally identical functions), NASA may well have been able to
avoid many of those embarrassing 'accidents'.
However, at that time, the state of the art in lightweight,
radiation-hardened computerised control systems meant they couldn't
afford the weight penalty of tripling up their on-board computerised
controllers and the additional I/O hardware mandated by such a three-way
system.
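The two-out-of-three voter at the heart of such a scheme is simple enough
to sketch in a few lines (an illustration of the principle only, not any
actual flight code):

```python
from collections import Counter

def vote(outputs):
    """2-of-3 majority vote over nominally identical controller outputs.

    Raises if no strict majority exists (all channels disagree)."""
    value, count = Counter(outputs).most_common(1)[0]
    if count * 2 <= len(outputs):
        raise RuntimeError("no majority: controllers disagree")
    return value

# One faulty channel is simply outvoted by the two healthy ones:
print(vote([41.0, 42.0, 42.0]))  # 42.0
```

Note that the vote masks a single faulty channel but deliberately fails
loudly when all three disagree, which is the point: a localised fault is
contained instead of becoming a generalised failure.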
Of course, these days, with the advances in miniaturisation and the
several orders of magnitude improvement in the power-to-weight ratio of
modern SoC digital processor kit (Intel have even been incorporating
their 'processor within a processor' Management Engine technology, which
occupies less space than a micro-dot on the main CPU die yet is more than
capable of out-classing a ZX80 'computer' that, in principle, "could be
used to control a nuclear power station" :-) ), tripling up would be the
blindingly obvious solution to most of Edsger Dijkstra's problems today.
There's nothing intrinsically wrong with "brute forcing" a solution to
such problems. After all, it seems to work just fine in the natural world
of biological neural networking systems, aka "brains". :-)
Johnny B Good