Discussion:
Nest smoke alarm shows why IoT has been a disaster so far
RS Wood
2017-07-15 16:56:46 UTC
The Internet of Things - A Disaster - gekk
https://gekk.info/articles/iot.html

Formatted in plain html - very nice in Lynx.


//--clip
...
This keeps happening. IoT stuff - and other tech products - keep
coming out that throw away everything that came before them, so
if they fall short of their stratospheric goals for
revolutionizing the way we shell peanuts or make shopping lists
or whatever, they end up being completely worthless. If the Nest
Protect had been an ordinary smoke alarm that could also send you
a notice on your phone, then if that latter part didn't work you
could still use it as a smoke alarm. If the Protect had been an
ordinary smoke alarm with an optional voice module, if the voice
module malfunctioned, you could turn it off. Instead these
companies bet the whole farm on a brand-new design and often
close up shop when it fails.
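
To make that "ordinary smoke alarm first, notifications second" design concrete, here's a rough sketch of the graceful-degradation idea the author is arguing for. It's purely illustrative Python with hypothetical names - nothing to do with Nest's actual firmware - but it shows the shape of it: the local siren has no network dependency, and a dead cloud connection costs you only the phone notice.

class SmokeAlarm:
    def __init__(self, notifier=None):
        # `notifier` is an optional, hypothetical push-notification client.
        self.notifier = notifier

    def sound_siren(self):
        # Core, self-contained alarm path: no network involved at all.
        print("BEEP BEEP BEEP - smoke detected")

    def notify_phone(self, message):
        # Best-effort extra: any failure here is reported and swallowed so
        # it can never take down the core alarm function.
        if self.notifier is None:
            return
        try:
            self.notifier.send(message)
        except Exception as exc:
            print(f"notification failed ({exc}); local alarm unaffected")

    def on_smoke_detected(self):
        self.sound_siren()                            # always do the core job first
        self.notify_phone("Smoke detected at home")   # then the nice-to-have

class FlakyNotifier:
    """Stand-in for a cloud service that happens to be unreachable."""
    def send(self, message):
        raise ConnectionError("cloud endpoint unreachable")

if __name__ == "__main__":
    SmokeAlarm(FlakyNotifier()).on_smoke_detected()   # still beeps
    SmokeAlarm().on_smoke_detected()                  # no notifier fitted at all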
RS Wood
2017-07-15 23:43:46 UTC
Post by RS Wood
The Internet of Things - A Disaster - gekk
https://gekk.info/articles/iot.html
Formatted in plain html - very nice in Lynx.
//--clip
...
This keeps happening. IoT stuff - and other tech products - keep coming
out that throw away everything that came before them, so if they fall
short of their stratospheric goals for revolutionizing the way we shell
peanuts or make shopping lists or whatever, they end up being completely
worthless. If the Nest Protect had been an ordinary smoke alarm that
could also send you a notice on your phone, then if that latter part
didn't work you could still use it as a smoke alarm. If the Protect had
been an ordinary smoke alarm with an optional voice module, if the voice
module malfunctioned, you could turn it off. Instead these companies bet
the whole farm on a brand-new design and often close up shop when it
fails.
There's interesting follow-up at Hacker News where a guy who used to work at
Nest points out that a lot of this blogger's conjecture might be true of the
sector in general but was untrue of Nest. Sadly, that invalidates a lot of
his points. His overarching theme remains valid in my opinion, though, which
is: if you're going to replace existing analog systems with digital/IoT
upgrades, they'd damn well better supply equivalent trustworthiness, as
you're introducing systems that carry orders of magnitude greater risk of
error.
Rich
2017-07-16 12:00:27 UTC
Post by RS Wood
Post by RS Wood
The Internet of Things - A Disaster - gekk
https://gekk.info/articles/iot.html
Formatted in plain html - very nice in Lynx.
//--clip
... This keeps happening. IoT stuff - and other tech products -
keep coming out that throw away everything that came before them, so
if they fall short of their stratospheric goals for revolutionizing
the way we shell peanuts or make shopping lists or whatever, they
end up being completely worthless. If the Nest Protect had been an
ordinary smoke alarm that could also send you a notice on your
phone, then if that latter part didn't work you could still use it
as a smoke alarm. If the Protect had been an ordinary smoke alarm
with an optional voice module, if the voice module malfunctioned,
you could turn it off. Instead these companies bet the whole farm
on a brand-new design and often close up shop when it fails.
There's interesting follow-up at Hacker News where a guy who used to
work at Nest points out that a lot of this blogger's conjecture might
be true in general of the sector but was untrue of Nest. Sadly, that
invalidates a lot of his points. His overarching theme remains valid
in my opinion though, which is: if you're going to replace existing,
analog systems with digital/IoT upgrades, they'd damn well better
supply equivalent trustworthiness, as you're introducing systems that
generate orders of magnitude greater risk of error.
Those introducing the digital replacements for age-old reliable analog
devices would also do well to read Dijkstra's "On the Cruelty of Really
Teaching Computing Science"
(http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1036.PDF). Especially this
part:

...

The second radical novelty is that the automatic computer is our
first large-scale digital device. We had a few with a noticeable
discrete component: I just mentioned the cash register and can add
the typewriter with its individual keys: with a single stroke you can
type either a Q or a W but, though their keys are next to each other,
not a mixture of those two letters. But such mechanisms are the
exception, and the vast majority of our mechanisms are viewed as
analogue devices whose behaviour is over a large range a continuous
function of all parameters involved: if we press the point of the
pencil a little bit harder, we get a slightly thicker line, if the
violinist slightly misplaces his finger, he plays slightly out of
tune. To this I should add that, to the extent that we view
ourselves as mechanisms, we view ourselves primarily as analogue
devices: if we push a little harder we expect to do a little better.
Very often the behaviour is not only a continuous but even a
monotonic function: to test whether a hammer suits us over a certain
range of nails, we try it out on the smallest and largest nails of
the range, and if the outcomes of those two experiments are positive,
we are perfectly willing to believe that the hammer will suit us for
all nails in between.

It is possible, and even tempting, to view a program as an abstract
mechanism, as a device of some sort. To do so, however, is highly
dangerous: the analogy is too shallow because a program is, as a
mechanism, totally different from all the familiar analogue devices
we grew up with. Like all digitally encoded information, it has
unavoidably the uncomfortable property that the smallest possible
perturbations -- i.e. changes of a single bit -- can have the most
drastic consequences. [For the sake of completeness I add that the
picture is not essentially changed by the introduction of redundancy
or error correction.] In the discrete world of computing, there is no
meaningful metric in which "small" changes and "small" effects go
hand in hand, and there never will be.

...
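
To see how literally that holds, here's a toy illustration of my own (not from Dijkstra's paper): flip exactly one bit of the 64-bit IEEE-754 pattern of an ordinary double and, depending only on which bit it is, the value barely moves or can change by hundreds of orders of magnitude.

import struct

def flip_bit(x, bit):
    """Return the float whose 64-bit pattern differs from x in exactly one bit."""
    (as_int,) = struct.unpack(">Q", struct.pack(">d", x))
    (flipped,) = struct.unpack(">d", struct.pack(">Q", as_int ^ (1 << bit)))
    return flipped

value = 1000.0
for bit in (0, 32, 52, 62):   # low mantissa, mid mantissa, low exponent, high exponent
    print(f"flip bit {bit:2d}: {value} -> {flip_bit(value, bit)}")

There is no sense in which bit 62 is a "bigger" change than bit 0, yet the effects differ wildly - which is exactly the missing metric Dijkstra is talking about.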

Followed by Walter Maner in "Is Computer Ethics Unique?"
(http://www.cs.unm.edu/~pgk/readings/unique-ethics2.pdf):

...

EXAMPLE 7: Uniquely Discrete

In a stimulating paper "On the Cruelty of Really Teaching Computer
Science," Edsger Dijkstra examines the implications of one central,
controlling assumption: that computers are radically novel in the
history of the world. Given this assumption, it follows that
programming these unique machines will be radically different from
other practical intellectual activities. This, Dijkstra believes, is
because the assumption of continuity we make about the behavior of
most materials and artifacts does not hold for computer systems. For
most things, small changes lead to small effects, larger changes to
proportionately larger effects. If I nudge the accelerator pedal a
little closer to the floor, the vehicle moves a little faster. If I
press the pedal hard to the floor, it moves a lot faster. As
machines go, computers are very different.

A program is, as a mechanism, totally different from all the familiar
analogue devices we grew up with. Like all digitally encoded
information, it has, unavoidably, the uncomfortable property that the
smallest possible perturbations -- i.e., changes of a single bit --
can have the most drastic consequences.

This essential and unique property of digital computers leads to a
specific set of problems that gives rise to a unique ethical
difficulty, at least for those who espouse a consequentialist view of
ethics.

For an example of the kind of problem where small "perturbations"
have drastic consequences, consider the Mariner 18 mission, where the
absence of the single word NOT from one line of a large program
caused an abort. In a similar case, it was a missing hyphen in the
guidance program for an Atlas-Agena rocket that made it necessary for
controllers to destroy a Venus probe worth $18.5 million. It was a
single character omitted from a reconfiguration command that caused
the Soviet Phobos 1 Mars probe to tumble helplessly in space. I am
not suggesting that rockets rarely failed before they were
computerized. I assume the opposite is true, that in the past they
were far more susceptible to certain classes of failure than they are
today. This does not mean that the German V-2 rocket, for example,
can provide a satisfactory non-computer (or pre-computer) moral
analogy. The behavior of the V-2, being an analog device, was a
continuous function of all its parameters. It failed the way analog
devices typically fail -- localized failures for localized problems.
Once rockets were controlled by computer software, however, they
became vulnerable to additional failure modes that could be extremely
generalized even for extremely localized problems.

"In the discrete world of computing," Dijkstra concludes, "there is
no meaningful metric in which 'small' change and 'small' effects go
hand in hand, and there never will be." This discontinuous and
disproportionate connection between cause and effect is unique to
digital computers and creates a special difficulty for
consequentialist theories. The decision procedure commonly followed
by utilitarians (a type of consequentialist) requires them to predict
alternative consequences for the alternative actions available to
them in a particular situation. An act is good if it produces good
consequences, or at least a net excess of good consequences over bad.
The fundamental difficulty utilitarians face, if Dijkstra is right,
is that the normally predictable linkage between acts and their
effects is severely skewed by the infusion of computing technology.
In short, we simply cannot tell what effects our actions will have on
computers by analogy to the effects our actions have on other
machines.

...
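
For a concrete flavour of those one-token failures, here's an entirely hypothetical guard function (nothing like the real probe code, which I've never seen): two versions that differ only by the single word `not`, and so command opposite actions for the same inputs.

def guard_intended(contact_lost, data_valid):
    # Abort only when radar contact is lost AND the smoothing data is NOT valid.
    return contact_lost and not data_valid

def guard_missing_not(contact_lost, data_valid):
    # The same line with the single word `not` omitted.
    return contact_lost and data_valid

for contact_lost in (False, True):
    for data_valid in (False, True):
        a = "ABORT" if guard_intended(contact_lost, data_valid) else "fly on"
        b = "ABORT" if guard_missing_not(contact_lost, data_valid) else "fly on"
        print(f"contact_lost={contact_lost!s:5} data_valid={data_valid!s:5}  intended: {a:7}  missing not: {b}")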
Johnny B Good
2017-07-16 17:06:42 UTC
On Sun, 16 Jul 2017 12:00:27 +0000, Rich wrote:

====snip====
Post by Rich
It is possible, and even tempting, to view a program as an abstract
mechanism, as a device of some sort. To do so, however, is highly
dangerous: the analogy is too shallow because a program is, as a
mechanism, totally different from all the familiar analogue devices we
grew up with. Like all digitally encoded information, it has
unavoidably the uncomfortable property that the smallest possible
perturbations -- i.e. changes of a single bit -- can have the most
drastic consequences. [For the sake of completeness I add that the
picture is not essentially changed by the introduction of redundancy
or error correction.] In the discrete world of computing, there is no
meaningful metric in which "small" changes and "small" effects go hand
in hand, and there never will be.
The moment a computational method is applied to any analogue problem, this
hazard of small changes having drastic consequences for the outcome is ever
present, and it pre-dates even Babbage's Difference Engine. All that's
required is a willing computor (a human capable of doing long division using
nothing more than a writing implement and a recording surface - paper, wax or
clay tablet, or what have you).
Post by Rich
Followed by Walter Maner in "Is Computer Ethics Unique?"
...
EXAMPLE 7: Uniquely Discrete
In a stimulating paper "On the Cruelty of Really Teaching Computer
Science," Edsger Dijkstra examines the implications of one central,
controlling assumption: that computers are radically novel in the
history of the world. Given this assumption, it follows that
programming these unique machines will be radically different from
other practical intellectual activities. This, Dijkstra believes, is
because the assumption of continuity we make about the behavior of
most materials and artifacts does not hold for computer systems. For
most things, small changes lead to small effects, larger changes to
proportionately larger effects. If I nudge the accelerator pedal a
little closer to the floor, the vehicle moves a little faster. If I
press the pedal hard to the floor, it moves a lot faster. As machines
go, computers are very different.
A program is, as a mechanism, totally different from all the familiar
analogue devices we grew up with. Like all digitally encoded
information, it has, unavoidably, the uncomfortable property that the
smallest possible perturbations -- i.e., changes of a single bit --
can have the most drastic consequences.
This essential and unique property of digital computers leads to a
specific set of problems that gives rise to a unique ethical
difficulty, at least for those who espouse a consequentialist view of
ethics.
For an example of the kind of problem where small "perturbations" have
drastic consequences, consider the Mariner 18 mission, where the
absence of the single word NOT from one line of a large program caused
an abort. In a similar case, it was a missing hyphen in the guidance
program for an Atlas-Agena rocket that made it necessary for
controllers to destroy a Venus probe worth $18.5 million. It was a
single character omitted from a reconfiguration command that caused
the Soviet Phobos 1 Mars probe to tumble helplessly in space. I am
not suggesting that rockets rarely failed before they were
computerized. I assume the opposite is true, that in the past they
were far more susceptible to certain classes of failure than they are
today. This does not mean that the German V-2 rocket, for example,
can provide a satisfactory non-computer (or pre-computer) moral
analogy. The behavior of the V-2, being an analog device, was a
continuous function of all its parameters. It failed the way analog
devices typically fail -- localized failures for localized problems.
Once rockets were controlled by computer software, however, they
became vulnerable to additional failure modes that could be extremely
generalized even for extremely localized problems.
Those are all examples of 'Human Error'. Replace the digital computer with a
human pilot who hasn't been comprehensively trained to deal with *every
possible* contingency in the class of contingencies that can be handled
merely by taking the appropriate controlling actions, and you can produce the
same failures. In short, the problem is bad or inadequate programming, and
that applies equally to biological neural processing systems.
Post by Rich
"In the discrete world of computing," Dijkstra concludes, "there is no
meaningful metric in which 'small' change and 'small' effects go hand
in hand, and there never will be." This discontinuous and
disproportionate connection between cause and effect is unique to
digital computers and creates a special difficulty for
Excuse me! "unique to digital computers"? Really? That's a rather
pompous (and extremely narrow) statement to make about a more generalised
problem of data processing, one the natural world has been dealing with over
an ongoing R&D period in excess of half a billion years and still counting.

Oh! The hubris of this "Johnny-come-lately" who believes the problem
revolves entirely around his own specialist subject of artificially created
digital computing machines. He may be trying to teach programmers how to be
'better programmers', but ignoring the true origins of the problem of
creating 'flawless software solutions' - origins which pre-date electronic
digital computers by more than half a billion years - only does his own
cause a severe disservice.
Post by Rich
consequentialist theories. The decision procedure commonly followed
by utilitarians (a type of consequentialist) requires them to predict
alternative consequences for the alternative actions available to them
in a particular situation. An act is good if it produces good
consequences, or at least a net excess of good consequences over bad.
The fundamental difficulty utilitarians face, if Dijkstra is right, is
that the normally predictable linkage between acts and their effects
is severely skewed by the infusion of computing technology. In short,
we simply cannot tell what effects our actions will have on computers
by analogy to the effects our actions have on other machines.
I wonder how that is different from the situation where the interaction
is between one human being and another?

One does not need computer technology to produce wildly unpredictable
outcomes. Purely analogue systems are more than capable of producing such
results: witness the demonstration using a steel ball hung from the end of a
stiff wire spar, suspended by a uni-pivot joint over two magnets stuck onto a
base plate a few millimetres below the ball's reach, the magnets standing in
for two of the three bodies in the classic "Three Body Orbital Mechanics
Problem". There's plenty of chaotic behaviour in a system as simple as this
(and it gets even more 'interesting' with each additional magnet).
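
You don't even need to build the desk toy to watch that sensitivity at work. Here's a crude numerical model of my own (toy parameters, deliberately not a calibrated simulation of the real gadget): a damped bob pulled towards two 'magnets', released twice from starting points that differ by one part in a million. Printing the final separation shows how far that microscopic nudge can get amplified.

import math

MAGNETS = [(-1.0, 0.0), (1.0, 0.0)]          # magnet positions on the base plate
SPRING, DAMPING, HEIGHT, DT = 0.5, 0.1, 0.2, 0.001

def accel(x, y):
    ax, ay = -SPRING * x, -SPRING * y         # pendulum restoring force
    for mx, my in MAGNETS:                    # attraction towards each magnet
        dx, dy = mx - x, my - y
        d3 = (dx * dx + dy * dy + HEIGHT * HEIGHT) ** 1.5
        ax += dx / d3
        ay += dy / d3
    return ax, ay

def simulate(x, y, steps=50000):
    vx = vy = 0.0
    for _ in range(steps):
        ax, ay = accel(x, y)
        vx += (ax - DAMPING * vx) * DT
        vy += (ay - DAMPING * vy) * DT
        x += vx * DT
        y += vy * DT
    return x, y

a = simulate(0.30, 1.00)
b = simulate(0.30 + 1e-6, 1.00)               # nudge the release point by 1e-6
print("end point A:", a)
print("end point B:", b)
print("final separation:", math.dist(a, b))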

Then there's the question of 'Black Holes', which were described by, I
believe, Albert Einstein (or maybe one of his contemporaries) as places in
the Universe where God had introduced a "Divide by Zero Error", spoiling his
'pretty' theory of General Relativity as a means to fully describe the
Universe and all of its behaviours.

The only computers harmed by these problems were the human computors - the
mathematicians. As has been said by another (I think I'm paraphrasing), "A
computer is just a faster way to make mistakes". More accurately, a computer
is a means of speeding up the detection of errors in the mathematical
formulae typically used to model the behaviour of real-world systems (natural
or artificial).

As electronic digital computers get faster and faster, the rate at which
'garbage' appears on their output ports increases whenever we feed 'garbage'
data into their input ports, satisfying the famous GIGO condition. In this
case, the GI portion isn't limited to test data; it also includes the program
instructions (which, after all, are merely another set of 'data' that happens
to have a special status).

If those space probe delivery systems had used the "Triple Redundancy"
technique now in common use aboard modern passenger-carrying airliners as
part of their fly-by-wire computerised control systems, in which the final
control outputs are the result of a three-way vote by three independently
manufactured controllers, each independently programmed to perform nominally
identical functions, then NASA may well have been able to avoid many of those
embarrassing 'accidents'.
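
The voting part itself is almost trivially simple. A minimal sketch (hypothetical controllers knocked up for illustration, nothing like a certified fly-by-wire unit): three independently written routines compute the same nominal output, and a two-out-of-three majority masks a single bad channel.

from collections import Counter

def controller_a(setpoint, measured):
    return round(setpoint - measured, 3)                          # implementation one

def controller_b(setpoint, measured):
    return round(-(measured - setpoint), 3)                       # independently written take two

def controller_c(setpoint, measured):
    return round((setpoint * 1000 - measured * 1000) / 1000, 3)   # take three

def faulty_channel(setpoint, measured):
    return 999.0                                                  # stand-in for a corrupted channel

def voted_output(channels, setpoint, measured):
    outputs = [ch(setpoint, measured) for ch in channels]
    value, votes = Counter(outputs).most_common(1)[0]
    if votes < 2:
        raise RuntimeError(f"no majority among channels: {outputs}")
    return value

print(voted_output([controller_a, controller_b, controller_c], 10.0, 7.25))
print(voted_output([controller_a, faulty_channel, controller_c], 10.0, 7.25))  # bad channel out-voted

The catch, of course, is the one covered next: three controllers and three sets of I/O hardware weigh three times as much.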

However, at that time, the "State of the Art" in lightweight, "Radiation
Hardened" computerised control systems meant they couldn't afford the weight
penalty of tripling up their on-board computerised controllers and the
additional I/O hardware mandated by such a three-way system.

Of course, these days, the advances in miniaturisation and the several orders
of magnitude improvement in the power-to-weight ratio of modern SoC digital
processor kit, where Intel have been incorporating their IAM 'Processor
within a Processor' technology that occupies less space than a micro-dot on
the main CPU die (and is more than capable of out-classing a ZX80 'computer'
which, in principle, "Could be used to control a nuclear power station" :-) ),
would make such redundancy the blindingly obvious solution to most of Edsger
Dijkstra's problems today.

There's nothing intrinsically wrong with "Brute Forcing" a solution to
such problems. After all, it seems to work just fine in the Natural World
of Biological neural networking systems, aka "Brains". :-)
--
Johnny B Good