Discussion:
[Link Posting] Conservative web development
Rich
2018-09-05 22:40:40 UTC
Reply
Permalink
Raw Message
####################################################################
# ATTENTION: This post is a reference to a website. The poster of #
# this Usenet article is not the author of the referenced website. #
####################################################################

<URL:https://drewdevault.com/2018/09/04/Conservative-web-development.html>
Today I turned off my ad blocker, enabled JavaScript, opened my network
monitor, and clicked the first link on Hacker News - a New York Times
article. It started by downloading a megabyte of data as it rendered the
page over the course of eight full seconds. The page opens with an
advertisement 281 pixels tall, placed before even the title of the
article. As I scrolled down, more and more requests were made,
downloading a total of 2.8 MB of data with 748 HTTP requests. An article
was woven between a grand total of 1419 vertical pixels of ad space,
greater than the vertical resolution of my display. Another 153-pixel ad
is shown at the bottom, after the article. Four of the ads were
identical.
I was reminded to subscribe three times, for $1/week (after one year
this would become $3.75/week). One of these reminders attached itself to
the bottom of my screen and followed along as I scrolled. If I scrolled
up, it replaced this with a larger banner, which showed me three other
articles and an ad. I was asked for my email address once, though I
would have had to fill out a captcha to submit it. I took out my phone
and repeated the experiment. It took 15 seconds to load, and I estimate
the ads took up a vertical space equal to 4 times my phone's vertical
resolution, each ad alone taking up half of my screen.
The text of the article is a total of 9037 bytes, including the title,
author, and date. I downloaded the images relevant to the article,
including the 1477x10821 title image. Before I ran them through an
optimizer, they weighed 260 KB; after, 236 KB (using only lossless
optimizations). 8% of the total download was dedicated to the content. 5
discrete external companies were informed of my visit to the page and
given the opportunity to run arbitrary JavaScript on it.
If these are the symptoms, what is the cure?
...
Dan Purgert
2018-09-06 01:59:53 UTC
[...]
If these are the symptoms, what is the cure?
Going back to the old ways, {white, green, amber} on black :)

Seriously though, there really needs to be a push to make it so things
like noscript and ublock are exceptions, rather than the norm.
--
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5 4AEE 8E11 DDF3 1279 A281
Ivan Shmakov
2018-09-06 13:10:50 UTC
Post by Dan Purgert
If these are the symptoms, what is the cure?
Going back to the old ways, {white, green, amber} on black :)
FWIW, these days I prefer either gray on blue, or navy on gray.
Post by Dan Purgert
Seriously though, there really needs to be a push to make it so
things like noscript and ublock are exceptions, rather than the norm.
Indeed, there're different ways to improve road safety: starting
from improving roads, as well as traffic signs and road markings,
to improving vehicles and drivers' training, etc.

However, I have to admit that the suggestion that at some point
we may have such a high level of road safety as to entirely
abolish seat belts caught me by surprise.
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Dan Purgert
2018-09-06 13:27:10 UTC
Post by Ivan Shmakov
Post by Dan Purgert
If these are the symptoms, what is the cure?
Going back to the old ways, {white, green, amber} on black :)
FWIW, these days I prefer either gray on blue, or navy on gray.
That works too.
Post by Ivan Shmakov
Post by Dan Purgert
Seriously though, there really needs to be a push to make it so
things like noscript and ublock are exceptions, rather than the norm.
Indeed, there're different ways to improve road safety: starting
from improving roads, as well as traffic signs and road markings,
to improving vehicles and drivers' training, etc.
However, I have to admit that the suggestion that at some point
we may have such a high level of road safety as to entirely
abolish seat belts caught me by surprise.
Er..

Way I see it is that Adblock (etc.) are the equivalent of grabbing a few
Tylenol (or your painkiller of choice). Sure, I've got a bottle of them
in the medicine cabinet, but I'm not taking them for every little ache I
may get.

Same should be for adblockers -- sites that aren't overly egregious in
their display of them (AND the ads aren't being evil in their own
right), fine let them through.
--
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5 4AEE 8E11 DDF3 1279 A281
Richard Kettlewell
2018-09-06 14:46:59 UTC
Post by Dan Purgert
Same should be for adblockers -- sites that aren't overly egregious in
their display of them (AND the ads aren't being evil in their own
right), fine let them through.
You can’t tell in advance if an ad is ‘being evil’ in its own right - ad
networks are, from time to time, compromised and start delivering
malware. Ad blockers are security software and attempts to persuade
users to disable them are social engineering attacks.
--
https://www.greenend.org.uk/rjk/
Huge
2018-09-06 15:29:20 UTC
Post by Richard Kettlewell
Post by Dan Purgert
Same should be for adblockers -- sites that aren't overly egregious in
their display of them (AND the ads aren't being evil in their own
right), fine let them through.
You can’t tell in advance if an ad is ‘being evil’ in its own right - ad
networks are, from time to time, compromised and start delivering
malware. Ad blockers are security software and attempts to persuade
users to disable them are social engineering attacks.
*applause*
--
Today is Prickle-Prickle, the 30th day of Bureaucracy in the YOLD 3184
~ Stercus accidit ~
Richard Kettlewell
2018-09-06 16:38:25 UTC
Post by Richard Kettlewell
Post by Dan Purgert
Same should be for adblockers -- sites that aren't overly egregious in
their display of them (AND the ads aren't being evil in their own
right), fine let them through.
You can’t tell in advance if an ad is ‘being evil’ in its own right - ad
networks are, from time to time, compromised and start delivering
malware. Ad blockers are security software and attempts to persuade
users to disable them are social engineering attacks.
Yeah, I suppose that limiting it to 'trusted' sites (i.e. the ones who
follow some guideline of "ad space vs. content") is fine then.
In terms of using ads to deliver malware; yeah, can't really think of a
way to fix that (beyond some type of static ad I suppose).
There isn’t one. You can have a ‘no JavaScript’ rule but you remain at
risk from vulnerabilities in HTTP parsers, HTML parsers, CSS parsers and
image decoders (and the enforcement of the ‘no JS’ rule could be buggy).

In the long term it’s to be hoped that all of these things will be
reimplemented in memory-safe languages. But we’re not quite there yet,
and that only eliminates certain vulnerability classes anyway.

All of these risks exist in the non-advertising content that you
actually cared about, too. But you can eliminate a lot of the risk by
eliminating the origins you don’t care about, and adverts are not only
at the top of the list but they are also preferred places for an
attacker to target: if you can crack one ad network, you get to target
the users of every site that uses it.
--
https://www.greenend.org.uk/rjk/
Paul Sture
2018-09-06 16:43:30 UTC
Ad blockers are security software and attempts to persuade users to
disable them are social engineering attacks.
An excellent take on the problem.
--
We will not be enslaved through coercion, but by the lure of convenience.
Jerry Peters
2018-09-06 19:01:44 UTC
Post by Dan Purgert
Same should be for adblockers -- sites that aren't overly egregious in
their display of them (AND the ads aren't being evil in their own
right), fine let them through.
You can’t tell in advance if an ad is ‘being evil’ in its own right - ad
networks are, from time to time, compromised and start delivering
malware. Ad blockers are security software and attempts to persuade
users to disable them are social engineering attacks.
And IIRC, the NYT was one of the sites where the third party ad
network had been infected and was supplying malware.
Ant
2018-09-06 23:06:25 UTC
Post by Jerry Peters
Post by Dan Purgert
Same should be for adblockers -- sites that aren't overly egregious in
their display of them (AND the ads aren't being evil in their own
right), fine let them through.
You can’t tell in advance if an ad is ‘being evil’ in its own right - ad
networks are, from time to time, compromised and start delivering
malware. Ad blockers are security software and attempts to persuade
users to disable them are social engineering attacks.
And IIRC, the NYT was one of the sites where the third party ad
network had been infected and was supplying malware.
It seems like every web site that uses ads has these infections. :(
--
Quote of the Week: "For example, the tiny ant, a creature of great
industry, drags with its mouth whatever it can, and adds it to the heap
which she is piling up, not unaware nor careless of the future."
--Horace, Satires, Book I, I, 33.
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\Ant(Dude) @ http://antfarm.home.dhs.org / http://antfarm.ma.cx
/ /\ /\ \ Please nuke ANT if replying by e-mail privately. If credit-
| |o o| | ing, then please kindly use Ant nickname and URL/link.
\ _ /
( )
Paul Sture
2018-09-06 17:03:23 UTC
Post by Ivan Shmakov
Post by Dan Purgert
If these are the symptoms, what is the cure?
Going back to the old ways, {white, green, amber} on black :)
FWIW, these days I prefer either gray on blue, or navy on gray.
As I get older I find that readability depends on the contrast.

Can you point me to an example, please?
--
We will not be enslaved through coercion, but by the lure of convenience.
Ant
2018-09-06 23:05:36 UTC
Post by Dan Purgert
[...]
If these are the symptoms, what is the cure?
Going back to the old ways, {white, green, amber} on black :)
Use text web browsers like Lynx! :D
--
Quote of the Week: "For example, the tiny ant, a creature of great
industry, drags with its mouth whatever it can, and adds it to the heap
which she is piling up, not unaware nor careless of the future."
--Horace, Satires, Book I, I, 33.
Note: A fixed width font (Courier, Monospace, etc.) is required to see this signature correctly.
/\___/\Ant(Dude) @ http://antfarm.home.dhs.org / http://antfarm.ma.cx
/ /\ /\ \ Please nuke ANT if replying by e-mail privately. If credit-
| |o o| | ing, then please kindly use Ant nickname and URL/link.
\ _ /
( )
Computer Nerd Kev
2018-09-06 23:20:08 UTC
Post by Dan Purgert
[...]
If these are the symptoms, what is the cure?
Going back to the old ways, {white, green, amber} on black :)
Seriously though,
What's not serious? I read that article as Green text on black. CSS
does tend to mess that up though, with page authors constantly
setting one colour (background or text) but not the other, and
leaving me to highlight text in order to get enough contrast to
read it (which doesn't always work either).

Still, an easy solution to that is to disable CSS, which I consider
to be 90% needless over-complication of pages that could be easily
done in plain HTML. Indeed I read all the articles here with CSS
disabled because it's just text and I want to control how that
looks on my monitor. That's clearly not the preference of the
article's author though.
--
__ __
#_ < |\| |< _#
Dirk T. Verbeek
2018-09-06 11:32:45 UTC
Post by Rich
####################################################################
# ATTENTION: This post is a reference to a website. The poster of #
# this Usenet article is not the author of the referenced website. #
####################################################################
<URL:https://drewdevault.com/2018/09/04/Conservative-web-development.html>
[...]
If these are the symptoms, what is the cure?
...
Someone needs to pay for all that bandwidth, hence the advertisers :)
The cure is to read the news from a paper, not a screen.
That way you still get the ads but at least you're not tracked.
Huge
2018-09-06 12:07:44 UTC
Post by Dirk T. Verbeek
Post by Rich
####################################################################
# ATTENTION: This post is a reference to a website. The poster of #
# this Usenet article is not the author of the referenced website. #
####################################################################
<URL:https://drewdevault.com/2018/09/04/Conservative-web-development.html>
Today I turned off my ad blocker, enabled JavaScript, opened my network
[26 lines snipped]
Post by Dirk T. Verbeek
Post by Rich
given the opportunity to run arbitrary JavaScript on it.
If these are the symptoms, what is the cure?
...
Someone needs to pay for all that bandwidth, hence the advertisers :)
The cure is to read the news from a paper, not a screen.
That way you still get the ads but at least you're not tracked.
Really? Why would I want to read the ads? I find NoScript, Ghostery, AdBlock+
and a huge hosts file means I rarely see advertising.
--
Today is Prickle-Prickle, the 30th day of Bureaucracy in the YOLD 3184
~ Stercus accidit ~
Mike Spencer
2018-09-07 05:34:11 UTC
Post by Huge
Really? Why would I want to read the ads? I find NoScript, Ghostery,
AdBlock+ ...
Do those utilities also block <noscript> tags/blocks that send what
appear to be [1] requests for bits devoted to tracking?
Post by Huge
...and a huge hosts file means I rarely see advertising.
Noscript tags are places to look for additions to /etc/hosts.
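That idea can be sketched in a few lines. This is a rough, hypothetical helper of my own naming, assuming you've already saved the page locally (e.g. with wget); it pulls the hostnames referenced inside <noscript> blocks and formats them as hosts-file blackhole entries:

```python
import re

def noscript_hosts(html: str) -> list[str]:
    """Collect hostnames referenced inside <noscript> blocks,
    formatted as /etc/hosts blackhole entries (0.0.0.0)."""
    found = set()
    # Each <noscript>...</noscript> region, case-insensitive,
    # with '.' matching newlines.
    for block in re.findall(r'(?is)<noscript>(.*?)</noscript>', html):
        # The host part of any http(s) URL inside the block.
        found.update(re.findall(r'(?i)https?://([^/"\'\s>]+)', block))
    return sorted('0.0.0.0 ' + h for h in found)
```

The output lines can be reviewed and appended to /etc/hosts by hand; being regex-based, it is best-effort rather than a real HTML parse.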
--
Mike Spencer Nova Scotia, Canada
Huge
2018-09-07 09:57:12 UTC
Post by Mike Spencer
Post by Huge
Really? Why would I want to read the ads? I find NoScript, Ghostery,
AdBlock+ ...
Do those utilities also block <noscript> tags/blocks that send what
appear to be [1] requests for bits devoted to tracking?
TBH, I don't know. I suspect I have most of the tracking targets aliased
out anyway.

This is worth looking at;

https://pi-hole.net/
Post by Mike Spencer
Post by Huge
...and a huge hosts file means I rarely see advertising.
Noscript tags are places to look for additions to /etc/hosts.
Good point. I shall have a look at that.
--
Today is Setting Orange, the 31st day of Bureaucracy in the YOLD 3184
~ Stercus accidit ~
Rich
2018-09-07 13:56:52 UTC
Post by Mike Spencer
Post by Huge
Really? Why would I want to read the ads? I find NoScript, Ghostery,
AdBlock+ ...
Do those utilities also block <noscript> tags/blocks that send what
appear to be [1] requests for bits devoted to tracking?
Post by Huge
...and a huge hosts file means I rarely see advertising.
Noscript tags are places to look for additions to /etc/hosts.
NoScript is a browser extension. It is not related to the HTML tag of
the same name.

Its purpose is to block javascript unless you, the user, choose to
specifically whitelist it (this is the most common situation, I believe
you can invert the meaning [allow all unless blacklisted] if you wish,
but doing so is kind of pointless even if possible).

So with the NoScript extension in the usual "block everything unless
whitelisted mode" none of the javascript on the page executes when the
page loads. You can then choose to allow specific parts to execute
(the granularity is by the source domain of the script).
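The block-by-default model described above reduces to a simple check. The whitelist contents and function name below are illustrative only, not NoScript's actual internals:

```python
from urllib.parse import urlparse

# Domains the user has explicitly whitelisted; everything else is denied.
ALLOWED_DOMAINS = {'example.org'}

def script_may_run(src_url: str) -> bool:
    """Default-deny: a script runs only if its source domain
    has been whitelisted by the user."""
    return urlparse(src_url).hostname in ALLOWED_DOMAINS
```

The granularity by source domain means one decision covers every script that domain serves.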
Kerry Imming
2018-09-06 18:12:28 UTC
If these are the symptoms, what is the cure?
Whatever happened to the concept of micro-payments?

In my ideal world each article from a publisher would cost X up to some
cap for the month.

The counter-argument:
There's a mental block that occurs when you have to pay per use though,
at least that's what I find. I subscribed to a newspaper for a year and
I estimate it cost me $0.30 per article I read. I still have Netflix
and it's costing over $1 per hour right now. I doubt I would
(knowingly) pay that much for each use but somehow the $200/year or
$10/month for "unlimited" use seems OK. Is there a study on this anywhere?

- Kerry
Rich
2018-09-06 18:22:18 UTC
Post by Kerry Imming
If these are the symptoms, what is the cure?
Whatever happened to the concept of micro-payments?
In my ideal world each article from a publisher would cost X up to some
cap for the month.
There's a mental block that occurs when you have to pay per use though,
at least that's what I find. I subscribed to a newspaper for a year and
I estimate it cost me $0.30 per article I read. I still have Netflix
and it's costing over $1 per hour right now. I doubt I would
(knowingly) pay that much for each use but somehow the $200/year or
$10/month for "unlimited" use seems OK.
Is there a study on this anywhere?
A likely answer is yes, but I don't know of any to point toward.

But, your "up to some cap for the month" part of your ideal is
interesting.

I suspect part of the mental block for "pay-per-use" is because the
"final bill" is unknown until after one has consumed whatever they
consume.

But with a "pay per" and a cap of "X", then there is a clear "end"
point to the "how much will this finally cost" (that being the cap).

So, take netflix. You are fine with $10/month. So if you had to pay
$1/hour up to $10/month max, it might not create as much of a block.

Turned around, it might create a reverse incentive to "consume" until
hitting the cap, just to "get to the free stuff" (the units beyond the
cap).

But, for the month where you watch only 1 hour of netflix, your bill
goes down to $1. This, of course, hurts netflix because almost all of
the businesses running $X/month for unlimited are arranged around some
(often large) percentage not getting anywhere near "maxing out" their
usage and therefore providing a larger margin to the business.
Kerry Imming
2018-09-07 13:21:25 UTC
Post by Rich
This, of course, hurts netflix because almost all of
the businesses running $X/month for unlimited are arranged around some
(often large) percentage not getting anywhere near "maxing out" their
usage and therefore providing a larger margin to the business.
I believe you are correct here. We'll probably never know what it
really costs companies like Netflix to provide their service on a
per-use basis. It's hard to be sure without that information, but I'd
likely prefer a per-use pricing with a monthly cap higher than their
current rate. That, or a lower monthly base rate with an additional
per-use price (i.e. the utility company model).

- Kerry
Rich
2018-09-07 14:02:49 UTC
Post by Kerry Imming
Post by Rich
This, of course, hurts netflix because almost all of
the businesses running $X/month for unlimited are arranged around some
(often large) percentage not getting anywhere near "maxing out" their
usage and therefore providing a larger margin to the business.
I believe you are correct here. We'll probably never know what it
really costs companies like Netflix to provide their service on a
per-use basis.
No, we won't, as they would not cough up that info. We can, however,
guesstimate based upon pricing we see for similar raw services, and the
likely 'cost' is probably quite low overall (as in a per-use pricing
that might be something like ten US cents to stream a 2h movie to a
customer). Now, there is copyright mafia overhead in that, plus base
costs to operate (customer service reps for phone/email comm, although
much of that today seems to be outsourced to India or Pakistan for
$1/day per rep, so the cost there is also very low).
Post by Kerry Imming
It's hard to be sure without that information, but I'd likely prefer
a per-use pricing with a monthly cap higher than their current rate.
That, or a lower monthly base rate with an additional per-use price
(i.e. the utility company model).
The utility model might be a better price scheme. The base rate could
cover the base costs, then the per use simply covers what little cost
is involved in the actual cost to deliver.
Kerry Imming
2018-09-07 14:50:32 UTC
Post by Rich
The utility model might be a better price scheme. The base rate could
cover the base costs, then the per use simply covers what little cost
is involved in the actual cost to deliver.
Plus profit, of course. Unfortunately their current pricing model is
likely much more lucrative, profit-wise.

- Kerry
Rich
2018-09-07 15:24:14 UTC
Post by Kerry Imming
Post by Rich
The utility model might be a better price scheme. The base rate
could cover the base costs, then the per use simply covers what
little cost is involved in the actual cost to deliver.
Plus profit, of course.
True, for some reasonable profit percentage. Here I always fall back
to what is claimed as the grocery store margin (something in the range
of 2-5%) to pick something 'reasonable'. Whether that value is
actually true I don't know.
Post by Kerry Imming
Unfortunately their current pricing model is likely much more
lucrative, profit-wise.
Yep. Twenty million folks paying $10/month, with only 10% [1] even
arriving close to consuming $10/month worth of resources results in a
lot of excess profit.




[1] a pure guess
Eli the Bearded
2018-09-06 18:56:38 UTC
Post by Kerry Imming
If these are the symptoms, what is the cure?
Whatever happened to the concept of micro-payments?
I think the big fear in micro-payments is how many of them there would
be. Consider if people paid for news automatically with a micro-payment
structure: $0.01 per page read. That would push all news towards both a
clickbait (to make you want to read it) world and a world with many,
many short pages with small content in them (so the cost per page to the
company is reduced). That's a lot like many sites today, except that the
micro-payer is the advertiser.

I think the solution is people asking for (and putting their money where
their mouth is) ad-free subscription services. Netflix for News, for
example.

Elijah
------
also avoid getting your search results from an advertising company
Kerry Imming
2018-09-07 13:03:33 UTC
...
...That would push all news towards both a
clickbait (to make you want to read it) world and a world with many,
many short pages with small content in them (so the cost per page to the
company is reduced). That's a lot like many sites today, except that the
micro-payer is the advertiser.
Agreed; I don't know how to avoid that. We're actually seeing this on
television now also. Since DVRs started to allow commercial skipping
broadcasters seem to be putting in even more commercials.
I think the solution is people asking for (and putting their money where
their mouth is) ad-free subscription services. Netflix for News, for
example.
WSJ has (as far as I know) been successful with this. (I think they'd
be more successful if they'd drop their introductory rates. It's a
personal thing, but it annoys me that they punish continuing
subscribers.) The problem is I like to get contrasting views sometimes
but don't want a full subscription to that many sources.

- Kerry
Rich
2018-09-07 14:05:09 UTC
Post by Kerry Imming
...
...That would push all news towards both a
clickbait (to make you want to read it) world and a world with many,
many short pages with small content in them (so the cost per page to the
company is reduced). That's a lot like many sites today, except that the
micro-payer is the advertiser.
Agreed; I don't know how to avoid that. We're actually seeing this on
television now also. Since DVRs started to allow commercial skipping
broadcasters seem to be putting in even more commercials.
They are also starting to overlay ads on the bottom quarter of the
show, after the commercial break (where one can't skip them without
also skipping the actual content).
Post by Kerry Imming
I think the solution is people asking for (and putting their money
where their mouth is) ad-free subscription services. Netflix for
News, for example.
WSJ has (as far as I know) been successful with this. (I think
they'd be more successful if they'd drop their introductory rates.
It's a personal thing, but it annoys me that they punish continuing
subscribers.) The problem is I like to get contrasting views
sometimes but don't want a full subscription to that many sources.
But, sadly, they allow free reads from facebook referrals, which
provides a gaping hole through their paywall for anyone willing to fake
a facebook referral.
Eli the Bearded
2018-09-07 20:01:47 UTC
Post by Kerry Imming
Agreed; I don't know how to avoid that. We're actually seeing this on
television now also. Since DVRs started to allow commercial skipping
broadcasters seem to be putting in even more commercials.
I don't watch much television, and even less broadcast (as opposed to
Netflixed) television. But I did start to notice five or so years ago
that commercials were now formatted to get the gist of the message
through on mute and played at double speed, which I took as a DVR
response.
Post by Kerry Imming
Post by Eli the Bearded
I think the solution is people asking for (and putting their money
where their mouth is) ad-free subscription services. Netflix for
News, for example.
WSJ has (as far as I know) been successful with this. (I think they'd
be more successful if they'd drop their introductory rates. It's a
personal thing, but it annoys me that they punish continuing
subscribers.) The problem is I like to get contrasting views sometimes
but don't want a full subscription to that many sources.
A subscription to the WSJ gets you news from non WSJ properties? That
would be news to me. I have household subscriptions to NYT, WaPo, and
Guardian, but not the Walleye.

Elijah
------
turned down a job working for the WSJ site in 1996 (bad location)
Kerry Imming
2018-09-10 13:29:14 UTC
Post by Eli the Bearded
A subscription to the WSJ gets you news from non WSJ properties?
No. Sorry, that wasn't clear. I believe the WSJ has been successful
getting people to subscribe to their content. Getting alternate sources
requires (additional) subscriptions to those sources.

- Kerry
Computer Nerd Kev
2018-09-06 23:22:11 UTC
Post by Rich
<URL:https://drewdevault.com/2018/09/04/Conservative-web-development.html>
If these are the symptoms, what is the cure?
A less polite description of the cure:
http://motherfuckingwebsite.com/
--
__ __
#_ < |\| |< _#
Eli the Bearded
2018-09-07 00:53:09 UTC
Post by Computer Nerd Kev
http://motherfuckingwebsite.com/
And if you load it in lynx, the google analytics in the page definitely
won't run.

Elijah
------
<!-- yes, I know...wanna fight about it? -->
Mike Spencer
2018-09-07 06:04:55 UTC
Post by Eli the Bearded
Post by Computer Nerd Kev
http://motherfuckingwebsite.com/
And if you load it in lynx, the google analytics in the page definitely
won't run.
Ha. After all that, he has a js block doing google analytics. Shpx!
Always with the js, that guy!

The trick is to concoct a filter that encounters every web page
*before* it reaches the HTML parser, DOM thingy, whatever. Regex'es
may not be good enough to parse HTML correctly but they're good enough
to strip all the <LINK... and <META... tags, all the <SCRIPT>, <SVG>
blocks and whatever else you don't like.
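A minimal version of such a pre-parser filter, with the caveat already given above: regexes can't parse HTML reliably, so treat this as best-effort stripping, not a security boundary. The tag list and names are illustrative:

```python
import re

# Elements removed wholesale, contents included.
_PAIRED = re.compile(r'(?is)<(script|svg|noscript)\b.*?</\1\s*>')
# Void tags removed outright.
_VOID = re.compile(r'(?i)<(link|meta)\b[^>]*>')

def strip_unwanted(html: str) -> str:
    """Drop <script>/<svg>/<noscript> blocks and <link>/<meta> tags
    before the page ever reaches the browser's parser."""
    return _VOID.sub('', _PAIRED.sub('', html))
```

In practice this would sit inside a local filtering proxy, applied to each response body on the way through.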

Sadly, I don't think I'm a wizardly enough hacker to implement that
for HTTPS but I'll soon have to give it a try when I finally abandon
Netscape 4.76 (because everybody, including cartoons, insists on using
the latest crypto). Maybe with a little help from various places on
Usenet.
--
Mike Spencer Nova Scotia, Canada
Eli the Bearded
2018-09-07 20:07:03 UTC
Post by Mike Spencer
The trick is to concoct a filter that encounters every web page
*before* it reaches the HTML parser, DOM thingy, whatever. Regex'es
may not be good enough to parse HTML correctly but they're good enough
to strip all the <LINK... and <META... tags, all the <SCRIPT>, <SVG>
blocks and whatever else you don't like.
That's "No Script" territory right there.
Post by Mike Spencer
Sadly, I don't think I'm a wizardly enough hacker to implement that
for HTTPS but I'll soon have to give it a try when I finally abandon
Netscape 4.76 (because everybody, including cartoons, insists on using
the latest crypto). Maybe with a little help from various places on
Usenet.
I posted about this problem of yours in the slackware group.

Message-ID: <eli$***@qaz.wtf>
Date: Fri, 6 Jul 2018 03:01:33
Subject: Re: Slackware's future
Post by Mike Spencer
Why do I want NN 4.76? It allows me easily to turn off js and
[...]

Payment Card Industry Data Security Standard ("PCI Compliance")
requires sites stop accepting older, now easily broken, SSL / TLS
versions. Anything older than TLS 1.2 should be refused as of July
2018 by any company that deals with credit cards, or any company
forced into PCI compliance by dint of other companies they interact
with.
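In Python terms, that floor looks like this; the context setup is the standard `ssl` module API, nothing site-specific:

```python
import ssl

# Refuse anything older than TLS 1.2, matching the PCI cutoff
# described above.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

Any client or server built on this context will fail the handshake with peers that only speak SSLv3, TLS 1.0, or TLS 1.1.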

You might want to look into running some sort of HTTPS-to-HTTP
endpoint proxy in front of your browser instead. Technically it is
quite feasible, if a tad complicated, but I can't think of one that
exists. You might be able to get a MITM proxy (eg mitmproxy) to do
it for you, or at least to talk to the sites in secure TLS and your
browser in insecure SSL.
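A toy version of that endpoint-proxy idea, purely as a sketch: the `?url=` query convention is invented here for simplicity, and a real shim would also need to rewrite links in the returned HTML and handle POSTs:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

def target_from_path(path: str) -> str:
    """Pull the upstream HTTPS URL out of e.g. /?url=https://site/page."""
    return parse_qs(urlparse(path).query).get('url', [''])[0]

class HttpsShim(BaseHTTPRequestHandler):
    """Old browser speaks plain HTTP to us; we speak modern TLS upstream."""
    def do_GET(self):
        target = target_from_path(self.path)
        if not target.startswith('https://'):
            self.send_error(400, 'expected ?url=https://...')
            return
        with urlopen(target) as upstream:  # modern TLS happens here
            body = upstream.read()
            ctype = upstream.headers.get('Content-Type', 'text/html')
        self.send_response(200)
        self.send_header('Content-Type', ctype)
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(('127.0.0.1', 8080), HttpsShim).serve_forever()
```

The old browser would then be pointed at `http://127.0.0.1:8080/?url=...` and never needs to negotiate TLS itself.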

Elijah
------
remembers well a problem like that
Computer Nerd Kev
2018-09-07 22:44:35 UTC
Reply
Permalink
Raw Message
Post by Eli the Bearded
Post by Mike Spencer
The trick is to concoct a filter that encounters every web page
*before* it reaches the HTML parser, DOM thingy, whatever. Regex'es
may not be good enough to parse HTML correctly but they're good enough
to strip all the <LINK... and <META... tags, all the <SCRIPT>, <SVG>
blocks and whatever else you don't like.
That's "No Script" territory right there.
Indeed, though if such add-ons aren't an option, redirecting
domains using the /etc/hosts file is a common alternative; it
solves the problem in a different way from what you're proposing.
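For instance, a hypothetical pair of /etc/hosts entries of that sort (the domains are made up for illustration):

```
# /etc/hosts -- send unwanted ad/tracker domains to the local machine
127.0.0.1   ads.example.com
127.0.0.1   tracker.example.net
```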
Post by Eli the Bearded
Post by Mike Spencer
Sadly, I don't think I'm a wizardly enough hacker to implement that
for HTTPS but I'll soon have to give it a try when I finally abandon
Netscape 4.76 (because everybody, including cartoons, insists on using
the latest crypto). Maybe with a little help from various places on
Usenet.
I posted about this problem of yours in the slackware group.
Date: Fri, 6 Jul 2018 03:01:33
Subject: Re: Slackware's future
Post by Mike Spencer
Why do I want NN 4.76? It allows me easily to turn off js and
[...]
Payment Card Industry Data Security Standard ("PCI Compliance")
requires sites stop accepting older, now easily broken, SSL / TLS
versions. Anything older than TLS 1.2 should be refused as of July
2018 by any company that deals with credit cards, or any company
forced into PCI compliance by dint of other companies they interact
with.
#START RANT

Now _why_ is it that all sorts of websites that don't accept payments
(credit card or otherwise), don't require log-in, in fact don't offer
any method of submitting information that could be vaguely sensitive,
have to follow suit and make it difficult for users running old
software, and/or devices on which newer software might not be
available?

It's completely pointless, only serving to put an expiration date
on every piece of software (and many devices) designed to work on
the internet!

#END RANT
Post by Eli the Bearded
You might want to look into running some sort of HTTPS-to-HTTP
endpoint proxy in front of your browser instead. Technically it is
quite feasible, if a tad complicated, but I can't think of one that
exists. You might be able to get a MITM proxy (eg mitmproxy) to do
it for you, or at least to talk to the sites in secure TLS and your
browser in insecure SSL.
For the latter approach, I've been using this lately:
http://www.YouTubeProxy.pw

Such services come and go fairly regularly though.
--
__ __
#_ < |\| |< _#
Dan Purgert
2018-09-08 00:02:44 UTC
Reply
Permalink
Raw Message
Post by Computer Nerd Kev
Post by Eli the Bearded
Payment Card Industry Data Security Standard ("PCI Compliance")
requires sites stop accepting older, now easily broken, SSL / TLS
versions. Anything older than TLS 1.2 should be refused as of July
2018 by any company that deals with credit cards, or any company
forced into PCI compliance by dint of other companies they interact
with.
#START RANT
Now _why_ is it that all sorts of websites that don't accept payments
(credit card or otherwise), don't require log-in, in fact don't offer
any method of submitting information that could be vaguely sensitive,
have to follow suit and make it difficult for users running old
software, and/or devices on which newer software might not be
available?
Because they don't. There's nothing stopping you from running plain
http to serve a website, or still offering SSL2/3 if you really want.
Although the gatekeepers of the internet (Google, Mozilla, etc.) will
likely tell people your site is bad.
--
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5 4AEE 8E11 DDF3 1279 A281
Eli the Bearded
2018-09-08 00:13:25 UTC
Reply
Permalink
Raw Message
Post by Computer Nerd Kev
#START RANT
Now _why_ is it that all sorts of websites that don't accept payments
(credit card or otherwise), don't require log-in, in fact don't offer
any method of submitting information that could be vaguely sensitive,
have to follow suit and make it difficult for users running old
software, and/or devices on which newer software might not be
available?
It's completely pointless, only serving to put an expiration date
on every piece of software (and many devices) designed to work on
the internet!
#END RANT
Some possible reasons:

1) Shared hosting services with the same SSL config across everything.

2) Default configuration files suitable for "current industry
standards".

3) Someone in the company looks at a security scan report regularly
and pressures people to keep things up to date.

I've certainly seen all three of the above as factors for having some
particular SSL config. Then there's a newer possibility:

4) The SSL config comes from the Let's Encrypt certbot and that has
set standards per (2).

There certainly is a good argument that if you need https, you should do
it right. Unfortunately many people don't really need https EXCEPT that
there are people out there actively modifying other people's HTML to add or
change ads within the pages, and https is the only way to ensure the
page is delivered unmolested to the end user. (This is also probably
part of why Google is pushing sites to have https. They don't want their
precious ad views stolen.)

Elijah
------
let Let's Encrypt select SSL standards for his personal site
Huge
2018-09-08 09:38:58 UTC
Reply
Permalink
Raw Message
Post by Eli the Bearded
Post by Computer Nerd Kev
#START RANT
Now _why_ is it that all sorts of websites that don't accept payments
(credit card or otherwise), don't require log-in, in fact don't offer
any method of submitting information that could be vaguely sensitive,
have to follow suit and make it difficult for users running old
software, and/or devices on which newer software might not be
available?
It's completely pointless, only serving to put an expiration date
on every piece of software (and many devices) designed to work on
the internet!
#END RANT
1) Shared hosting services with the same SSL config across everything.
2) Default configuration files suitable for "current industry
standards".
3) Someone in the company looks at a security scan report regularly
and pressures people to keep things up to date.
And I've had (big) arguments with management about all three of those
for "brochureware" sites where it's all completely pointless.
--
Today is Sweetmorn, the 32nd day of Bureaucracy in the YOLD 3184
~ Stercus accidit ~
Rich
2018-09-08 00:41:17 UTC
Reply
Permalink
Raw Message
Post by Computer Nerd Kev
Post by Eli the Bearded
Payment Card Industry Data Security Standard ("PCI Compliance")
requires sites stop accepting older, now easily broken, SSL / TLS
versions. Anything older than TLS 1.2 should be refused as of
July 2018 by any company that deals with credit cards, or any
company forced into PCI compliance by dint of other companies
they interact with.
#START RANT
Now _why_ is it that all sorts of websites that don't accept payments
(credit card or otherwise), don't require log-in, in fact don't offer
any method of submitting information that could be vaguely sensitive,
have to follow suit and make it difficult for users running old
software, and/or devices on which newer software might not be
available?
It's completely pointless, only serving to put an expiration date
on every piece of software (and many devices) designed to work on
the internet!
#END RANT
The most popular /excuse/ offered for "why" this is the case is that
without joes-blog.com being offered over https, it is impossible for
you, the blog reader, to know that you've really retrieved
"joes-blog.com" vs. "evil-blog.com" who is masquerading as
"joes-blog.com".

Of course, when this reason is offered, there is never any accompanying
explanation given for why it is so important to be certain that
"joes-blog.com" as retrieved at timestamp X is the authentic
"joes-blog.com" when the same is a read-only blog from the point of
view of the reader.

A second reason offered, although far less common than the one above,
but in my opinion the more reasonable one to offer, is that by
retrieving "joes-blog.com" over https one prevents one's ISP from
modifying the page content to suit the ISP's desires to inject
advertising and/or other "customer alerts and notices" into pages
browsed. The reason I consider this one more reasonable is that there
have been documented instances of ISP's doing exactly this to either
inject copyright violation notices or to inject their own paid
advertisements into the pages their customers browse. Yes, I might not
care too much if just "joes-blog.com" gets swapped by "evil-blog.com",
but I'd surely care a lot if my ISP is always injecting extra
advertising (or worse, deciding that "topic X" is forbidden and I
should not be even allowed to browse for it) into everything I browse.
Using https is not a cure-all for the second, as the domain name of
"forbidden topic X" is visible to the ISP in order to make the initial
connection, so there is still some possibility of filtering. But at
least it does prevent them from performing general meddling on a
constant basis.
Computer Nerd Kev
2018-09-08 08:03:33 UTC
Reply
Permalink
Raw Message
Post by Computer Nerd Kev
Post by Eli the Bearded
I posted about this problem of yours in the slackware group.
Date: Fri, 6 Jul 2018 03:01:33
Subject: Re: Slackware's future
Post by Mike Spencer
Why do I want NN 4.76? It allows me easily to turn off js and
[...]
Payment Card Industry Data Security Standard ("PCI Compliance")
requires sites stop accepting older, now easily broken, SSL / TLS
versions. Anything older than TLS 1.2 should be refused as of July
2018 by any company that deals with credit cards, or any company
forced into PCI compliance by dint of other companies they interact
with.
#START RANT
Now _why_ is it that all sorts of websites that don't accept payments
(credit card or otherwise), don't require log-in, in fact don't offer
any method of submitting information that could be vaguely sensitive,
have to follow suit and make it difficult for users running old
software, and/or devices on which newer software might not be
available?
It's completely pointless, only serving to put an expiration date
on every piece of software (and many devices) designed to work on
the internet!
#END RANT
In response to the valid points made in replies, I want to point out
that I did not mean to imply that providing the option of HTTPS is
bad, only forcing it when there is no significant security reason
to do so. On my own websites I have HTTPS enabled using Let's
Encrypt, but I only have unencrypted HTTP connections disabled for
certain pages where information is actually able to be submitted.
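For illustration, an Apache mod_rewrite rule along those lines, forcing HTTPS only for a hypothetical submission page while leaving the rest of the site reachable over plain HTTP (the /contact path is made up):

```
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteCond %{REQUEST_URI} ^/contact
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```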

If a user wants to use HTTPS for whatever reason, I have no problem
allowing them to do so (and to run an add-on to use it wherever
available, if they so desire). My problem is where sites deny
unencrypted HTTP connections completely and thereby prevent access
using older software not kept up to date with the latest security
protocols.

The protocols _should_ be up to date (who knows, maybe the person
using TLS 1.3 to view their TV guide actually has a good reason
for it), but when the information transmitted is not sensitive
people should not be forced to use this encryption.

As for web hosts doing it automatically, I guess that's possible.
But I doubt that complex websites would want their host messing
with their .htaccess files willy-nilly. Besides, the first sites
that did it are big enough that they would be running their own
servers.
--
__ __
#_ < |\| |< _#
Richard Kettlewell
2018-09-08 09:42:54 UTC
Reply
Permalink
Raw Message
Post by Computer Nerd Kev
The protocols _should_ be up to date (who knows, maybe the person
using TLS 1.3 to view their TV guide actually has a good reason
for it), but when the information transmitted is not sensitive
people should not be forced to use this encryption.
Firstly, it’s not just about confidentiality; integrity matters too.

As a concrete example, I don’t want end users’ ISPs inserting adverts
into pages from my website (or anyone else’s, but I have less control
over that).

Secondly, providing an unsecured option is just asking for downgrade
attacks.

I don’t think a niche interest in retrocomputing is sufficient to
justify compromising on these points.
--
https://www.greenend.org.uk/rjk/
Ivan Shmakov
2018-09-08 11:05:28 UTC
Reply
Permalink
Raw Message
Post by Computer Nerd Kev
The protocols _should_ be up to date (who knows, maybe the person
using TLS 1.3 to view their TV guide actually has a good reason for
it), but when the information transmitted is not sensitive people
should not be forced to use this encryption.
Seconded.
Firstly, it's not just about confidentiality; integrity matters too.
As a concrete example, I don't want end users' ISPs inserting adverts
into pages from my website (or anyone else's, but I have less control
over that).
While that may offer considerable protection against ISP
"getting creative," it wouldn't help against, say, a rogue CA
in the user's trust list, a (not necessarily malicious)
browser extension, -- or the hoster of your site misbehaving.

As an alternative, offering OpenPGP signatures (generated on
a system you have physical control over) will allow for the
integrity of the data you provide to be verified even in the
case that your site becomes inaccessible (so long as the data
was previously retrieved by the user -- or a third party.)

Personally, I believe that this choice should be up for the user
to make. And if they find out that their ISP does unreasonable
things to their traffic, they should have every right to complain
to them, as well as to switch to HTTPS, Tor, etc.

(Or they may decide not to bother. And frankly, if they don't,
why, exactly, should I?)

... Or they can decide to "get creative" with their own traffic
-- an option that's not nearly as accessible in the case of
"HTTPS everywhere." Of course, with contemporary browsers being
also JavaScript implementations, it should be possible for the
user to alter the Web pages from /within/ their browser. However:

* an "augmented browsing" software implemented as an HTTP proxy
will be available across all the browsers in use -- even those
not offering any extension mechanism;

* at least for Firefox, writing extensions appears to be an
overcomplicated affair; or at least I've never figured out why
an Emacs extension can be a single-line "(provide 'something)"
file, whereas Firefox seems to demand an entire forest of files.
Secondly, providing an unsecured option is just asking for downgrade
attacks.
AFAICT, downgrade attacks can only be meaningfully dealt with on
the client's side. Granted, HSTS does that, but with it being
an abomination by itself...
I don't think a niche interest in retrocomputing is sufficient to
justify compromising on these points.
I believe otherwise.
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Richard Kettlewell
2018-09-08 11:31:05 UTC
Reply
Permalink
Raw Message
Post by Ivan Shmakov
While that may offer considerable protection against ISP
"getting creative," it wouldn't help against, say, a rogue CA
in the user's trust list, a (not necessarily malicious)
browser extension, -- or the hoster of your site misbehaving.
As an alternative, offering OpenPGP signatures (generated on
a system you have physical control over) will allow for the
integrity of the data you provide to be verified even in the
case that your site becomes inaccessible (so long as the data
was previously retrieved by the user -- or a third party.)
The trust anchor situation for web PKI is certainly imperfect
(Banrisul’s attackers didn’t even need a rogue CA). I can tell you with
high confidence that OpenPGP isn’t going to solve it, though.
Post by Ivan Shmakov
Personally, I believe that this choice should be up for the user
to make. And if they find out that their ISP does unreasonable
things to their traffic, they should have every right to complain
to them, as well as to switch to HTTPS, Tor, etc.
It seems that you think that having your web traffic spied on and
tampered with should be an opt-out affair. I think it should be opt-in,
and as it turns out browser developers largely agree with me.

The tools for opting in are already well-developed in the form
of browser extensions and https middleware. As it turns out, very few
end users opt in by choice; such tools are used either because the user
has been deceived into installing an extension or because an IT
department or equipment vendor imposes middleware on them.
--
https://www.greenend.org.uk/rjk/
Ivan Shmakov
2018-09-08 11:55:58 UTC
Reply
Permalink
Raw Message
Post by Richard Kettlewell
While that may offer considerable protection against ISP "getting
creative," it wouldn't help against, say, a rogue CA in the user's
trust list, a (not necessarily malicious) browser extension, -- or
the hoster of your site misbehaving.
As an alternative, offering OpenPGP signatures (generated on a
system you have physical control over) will allow for the integrity
of the data you provide to be verified even in the case that your
site becomes inaccessible (so long as the data was previously
retrieved by the user -- or a third party.)
The trust anchor situation for web PKI is certainly imperfect
(Banrisul's attackers didn't even need a rogue CA). I can tell you
with high confidence that OpenPGP isn't going to solve it, though.
I suppose we disagree on what to consider a problem, then.
For instance, from where I stand, Debian has solved the problem
of tampering years before HTTPS became mainstream.
Post by Richard Kettlewell
Personally, I believe that this choice should be up for the user to
make. And if they find out that their ISP does unreasonable things
to their traffic, they should have every right to complain to them,
as well as to switch to HTTPS, Tor, etc.
It seems that you think that having your web traffic spied on and
tampered with should be an opt-out affair. I think it should be
opt-in, and as it turns out browser developers largely agree with me.
Well, I guess I already made it clear that I largely disagree
with browser developers.

Though I'm at a loss on how browser developers can decide whether
any given site is HTTPS-only or not.
Post by Richard Kettlewell
The tools for opting in are already well-developed in the
form of browser extensions
That doesn't help much as the users most interested in plain-HTTP
Web reading don't typically run browsers popular with the
extension writers.
Post by Richard Kettlewell
and https middleware.
If you mean the likes of Mitmproxy, Sslstrip, Sslsplit, etc., --
yes, I suppose that helps. (Didn't try any as of yet; I'm
pretty sure I'd need to hack one to my taste, and with two of
them being implemented in Python, I'd rather not.)

[...]
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Richard Kettlewell
2018-09-08 12:29:06 UTC
Reply
Permalink
Raw Message
Post by Ivan Shmakov
Post by Richard Kettlewell
The trust anchor situation for web PKI is certainly imperfect
(Banrisul's attackers didn't even need a rogue CA). I can tell you
with high confidence that OpenPGP isn't going to solve it, though.
I suppose we disagree on what to consider a problem, then.
For instance, from where I stand, Debian has solved the problem
of tampering years before HTTPS became mainstream.
Debian knows its developers, a community of just a few hundred
keyholders, and its users know Debian, a single keyholder. Collectively
they are willing, and technically able, to maintain their portion of the
web of trust (and do so better than any other population I know of,
which is already a hint that reasoning from Debian’s PKI to other
contexts is risky).

In the web there are millions of keyholders, most of whom have little or
no idea they are engaging with a cryptographic system, and wouldn’t know
how to operate one if they did. The situation is not comparable.
Post by Ivan Shmakov
Post by Richard Kettlewell
It seems that you think that having your web traffic spied on and
tampered with should be an opt-out affair. I think it should be
opt-in, and as it turns out browser developers largely agree with me.
Well, I guess I already made it clear that I largely disagree
with browser developers.
Though I'm at a loss on how browser developers can decide whether
any given site is HTTPS-only or not.
They don’t. But they deprecate old, broken versions of https, report
unencrypted sites as insecure, and provide infrastructure for securing
the upgrade from http to https.
Post by Ivan Shmakov
Post by Richard Kettlewell
The tools for opting in already are already well-developed in the
form of browser extensions
That doesn't help much as the users most interested in plain-HTTP
Web reading don't typically run browsers popular with the
extension writers.
I don’t see why everyone else should suffer reduced security for their
benefit.
Post by Ivan Shmakov
Post by Richard Kettlewell
and https middleware.
If you mean the likes of Mitmproxy, Sslstrip, Sslsplit, etc., --
yes, I suppose that helps. (Didn't try any as of yet; I'm
pretty sure I'd need to hack one to my taste, and with two of
them being implemented in Python, I'd rather not.)
I mean things like Bluecoat devices, Lenovo’s MITM adware, etc.
--
https://www.greenend.org.uk/rjk/
Ivan Shmakov
2018-09-08 14:20:36 UTC
Reply
Permalink
Raw Message
Post by Richard Kettlewell
Post by Richard Kettlewell
The trust anchor situation for web PKI is certainly imperfect
(Banrisul's attackers didn't even need a rogue CA). I can tell you
with high confidence that OpenPGP isn't going to solve it, though.
I suppose we disagree on what to consider a problem, then. For
instance, from where I stand, Debian has solved the problem of
tampering years before HTTPS became mainstream.
Debian knows its developers, a community of just a few hundred
keyholders, and its users know Debian, a single keyholder.
Collectively they are willing, and technically able, to maintain
their portion of the web of trust (and do so better than any other
population I know of, which is already a hint that reasoning from
Debian's PKI to other contexts is risky).
In the web there are millions of keyholders, most of whom have little
or no idea they are engaging with a cryptographic system, and wouldn't
know how to operate one if they did. The situation is not comparable.
I don't seem to understand what argument you're trying to make.

That incompetence can ruin any security scheme is something I'd
consider obvious. As such, securing a resource, whether Web or
otherwise, requires training. I fail to see how TLS is any
better in that respect than any other solution, be it OpenPGP,
Tor, or something else.

Moreover, I'd venture to guess that the vast majority of Web
users wouldn't need to maintain more than a few dozen keys
in their WoT.

Though yet again, that's not the point I'm trying to make. With
the cell telephony remaining largely unprotected and subject to
being intercepted and recorded by the government agencies all
around the world (as acknowledged by the government officials
themselves; say, [1]); with the banks having access to generous
amounts of information about who pays whom and how much; the
focus on "oh noes! they can add ads to my Web pages!" seems
rather lacking in the consistency department.

Also, from there, it isn't hard to imagine the push for HTTPS
being more of a distraction than something that arises out of
genuine concern over privacy or integrity. Although I have to
admit that Google being concerned with their ads being replaced
with the ISP's own ones can indeed be a factor.

Or, to put it differently, for those concerned with privacy, the
priority should be going cash-only, not HTTPS-only.

[1] http://telegraph.co.uk/news/uknews/terrorism-in-the-uk/11976008/MI5-and-GCHQ-secretly-bulk-collecting-British-publics-phone-and-email-records-for-years-Theresa-May-reveals.html
Post by Richard Kettlewell
Post by Richard Kettlewell
It seems that you think that having your web traffic spied on and
tampered with should be an opt-out affair.
To clarify: I would be fine with the browser defaulting to
"HTTPS only," allowing me to opt-out -- either for specific URIs,
or altogether. That the server administrators make a decision
for me is what I consider a problem.
Post by Richard Kettlewell
Post by Richard Kettlewell
I think it should be opt-in, and as it turns out browser developers
largely agree with me.
Well, I guess I already made it clear that I largely disagree with
browser developers.
For instance, I believe that running software offered for download
at an arbitrary Web resource should be an opt-in experience; as
should be running software that cannot be trusted /by design./

That Firefox still does /not/ offer any NoScript-like component
out of the box, as well as their adoption of EME, suggests that
its developers think otherwise.
Post by Richard Kettlewell
Though I'm at a loss on how browser developers can decide whether
any given site is HTTPS-only or not.
They don't. But they deprecate old, broken versions of https,
If that results in the user being required to explicitly allow
further communication with the remote party, that's reasonable.
If that results in the loss of interoperability, that's not.
Post by Richard Kettlewell
report unencrypted sites as insecure,
And now I'm curious as to whom my site got reported to?
Post by Richard Kettlewell
and provide infrastructure for securing the upgrade from http to
https.
IMO, calling the automatic replacement of "http:" in the URI
with "https:" for a list of hosts an "upgrade" is borderline
newspeak. As is calling such list "infrastructure."

[...]
Post by Richard Kettlewell
Post by Richard Kettlewell
The tools for opting in are already well-developed in the
form of browser extensions
That doesn't help much as the users most interested in plain-HTTP
Web reading don't typically run browsers popular with the extension
writers.
I don't see why everyone else should suffer reduced security for
their benefit.
I don't think it was shown in this discussion that they would.
Post by Richard Kettlewell
Post by Richard Kettlewell
and https middleware.
If you mean the likes of Mitmproxy, Sslstrip, Sslsplit, etc., --
yes, I suppose that helps. (Didn't try any as of yet; I'm pretty
sure I'd need to hack one to my taste, and with two of them being
implemented in Python, I'd rather not.)
I mean things like Bluecoat devices, Lenovo's MITM adware, etc.
These don't help, either.
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Huge
2018-09-08 15:23:11 UTC
Reply
Permalink
Raw Message
Post by Ivan Shmakov
Post by Richard Kettlewell
Post by Richard Kettlewell
The trust anchor situation for web PKI is certainly imperfect
(Banrisul's attackers didn't even need a rogue CA). I can tell you
with high confidence that OpenPGP isn't going to solve it, though.
I suppose we disagree on what to consider a problem, then. For
instance, from where I stand, Debian has solved the problem of
tampering years before HTTPS became mainstream.
Debian knows its developers, a community of just a few hundred
keyholders, and its users know Debian, a single keyholder.
Collectively they are willing, and technically able, to maintain
their portion of the web of trust (and do so better than any other
population I know of, which is already a hint that reasoning from
Debian's PKI to other contexts is risky).
In the web there are millions of keyholders, most of whom have little
or no idea they are engaging with a cryptographic system, and wouldn't
know how to operate one if they did. The situation is not comparable.
I don't seem to understand what argument you're trying to make.
That incompetence can ruin any security scheme is something I'd
consider obvious. As such, securing a resource, whether Web or
otherwise, requires training. I fail to see how TLS is any
better in that respect than any other solution, be it OpenPGP,
Tor, or something else.
Moreover, I'd venture to guess that the vast majority of Web
users wouldn't need to maintain more than a few dozens of keys
in their WoT.
Can I suggest that you come down out of your ivory tower and go and talk
to some actual *users*. You know, people who look at you blankly when
you talk about URLs. People who think crypto is for terrorists and
paedophiles. People who wouldn't know a key if you graved it in stone
and hit them over the head with it. Frankly, I don't know what it is
you're whinging about.
--
Today is Sweetmorn, the 32nd day of Bureaucracy in the YOLD 3184
~ Stercus accidit ~
Ivan Shmakov
2018-09-08 15:45:49 UTC
Reply
Permalink
Raw Message
X-No-Archive: Yes
While I find the X-No-Archive: header rather pointless, I suppose
I can as well honor the request.

[...]

... Which is to say, the people who would neither know nor care
if their connection to the server is compromised or not?

Well, I believe that they too have every right to use plain HTTP.
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Rich
2018-09-08 17:01:26 UTC
Reply
Permalink
Raw Message
Post by Ivan Shmakov
X-No-Archive: Yes
While I find the X-No-Archive: header rather pointless, I suppose
I can as well honor the request.
That one is a request to the nntp servers, not to you the end reader.
Post by Ivan Shmakov
... Which is to say, the people who would neither know nor care
if their connection to the server is compromised or not?
Well, I believe that they too have every right to use plain HTTP.
Those users don't know http from https, and usually don't care. If the
site appears after they search for it in google, they are happy.
Ivan Shmakov
2018-09-08 17:55:44 UTC
Reply
Permalink
Raw Message
Post by Rich
Post by Ivan Shmakov
X-No-Archive: Yes
While I find the X-No-Archive: header rather pointless, I suppose
I can as well honor the request.
That one is a request to the nntp servers, not to you the end reader.
My point is that the parts of the XNA article I quote /will/ be
archived, which would go against its author's wishes.

Not to mention that I /do/ archive Usenet (even though I do not
operate a server.) Consider, e. g.:

## LIST OVERVIEW.FMT
Subject: From: Date: Message-ID: References: Bytes: Lines: Xref:full
## GROUP sci.electronics.repair, XOVER 54765
54765 Re: OT. Talking to an Apple. Losing the will to live ...
[...] 23 Sep 2013 20:06:08 GMT
<***@mid.individual.net>
<Cgj%t.87895$***@fx27.am4> <l1m8ko$2l3$***@dont-email.me>
<***@brightview.co.uk>
1418 14
Xref: aioe.org sci.electronics.repair:54765 uk.d-i-y:3847670

Of course, my intent is to elide XNA articles from the published
version of the archives. (Which is something that has no set
date as of yet.)
Post by Rich
Post by Ivan Shmakov
... Which is to say, the people who would neither know nor care if
their connection to the server is compromised or not?
Well, I believe that they too have every right to use plain HTTP.
Those users don't know http from https, and usually don't care. If
the site appears after they search for it in google, they are happy.
That's quite the point: sites going HTTPS-only aren't doing so
for the benefit of their users; rather, as you mention elsewhere
in this thread, they're pressured into it so as not to lose their
standing among the search results.
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Rich
2018-09-08 18:15:36 UTC
Reply
Permalink
Raw Message
Post by Ivan Shmakov
Post by Rich
Post by Ivan Shmakov
While I find the X-No-Archive: header rather pointless, I suppose
I can as well honor the request.
That one is a request to the nntp servers, not to you the end reader.
My point is that the parts of the XNA article I quote /will/ be
archived, which would go against its author's wishes.
Not to mention that I /do/ archive Usenet (even though I do not
Yes, and as you say, you "do not operate a server" (so the header is
not aimed at you).

It came about when DejaNews started up back in 1995 and began
permanently archiving all Usenet postings. Prior to that, a given
Usenet article existed on Usenet until the various NNTP servers expired
it out of the pool (which was common at that time, storage being both
small and expensive). The emergence of a "wayback machine" for
Usenet, one that would allow an old post to be resurrected years
later, is what prompted that header.

Details at https://en.wikipedia.org/wiki/X-No-Archive

But you, an individual (who happens to be archiving), no, it was never
intended or meant to cover that.

Whether DejaNews' current linear successor (google groups) honors the
header anymore I do not know.
Post by Ivan Shmakov
Post by Rich
Post by Ivan Shmakov
Well, I believe that they too have every right to use plain HTTP.
Those users don't know http from https, and usually don't care. If
the site appears after they search for it in google, they are happy.
That's quite the point: sites going HTTPS-only aren't doing so
for the benefit of their users; rather, as you mention
elsewhere in this thread, they're pressured into it so as not
to lose their standing among the search results.
Yep. Pressure from the sites stream of "hits" (and advertising
revenue) is what is causing so many to change their web server configs
to enforce https only.

But your local browser is not (yet) making the switch on its own. It
may seem that way, given that you type http://example.com and it
almost immediately becomes https://example.com. But if you watch the
network tab in dev tools while you do it, you'll see the 3xx redirect
code coming back from the origin web server; that is the actual cause
of the change in your browser's url bar.
Richard Kettlewell
2018-09-08 15:44:43 UTC
Post by Ivan Shmakov
I don't seem to understand what argument you're trying to make.
The point is that your apparent proposal that OpenPGP could replace web
PKI is not realistic.
Post by Ivan Shmakov
Post by Richard Kettlewell
report unencrypted sites as insecure,
And now I'm curious as to whom my site got reported to?
The user.
Post by Ivan Shmakov
Post by Richard Kettlewell
and provide infrastructure for securing the upgrade from http to
https.
IMO, calling the automatic replacement of "http:" in the URI
with "https:" for a list of hosts an "upgrade" is borderline
newspeak.
It’s normal industry terminology.
--
https://www.greenend.org.uk/rjk/
Ivan Shmakov
2018-09-08 16:00:26 UTC
Post by Richard Kettlewell
Post by Ivan Shmakov
I don't seem to understand what argument you're trying to make.
The point is that your apparent proposal that OpenPGP could replace
web PKI is not realistic.
First, that should've been "complement"; apologies if I wasn't
being clear enough.

That said, with respect to integrity, my primary concern is
end-to-end verification between the author (as opposed to:
"server") and the reader. In this case, not only OpenPGP
"could replace" Web PKI -- it's already well-established.

Granted, it's mostly restricted to software distribution, but
the very same issues arise with, say, blogs; and so the same
solutions can be applied.

[...]
Post by Richard Kettlewell
Post by Ivan Shmakov
Post by Richard Kettlewell
and provide infrastructure for securing the upgrade from http to
https.
IMO, calling the automatic replacement of "http:" in the URI with
"https:" for a list of hosts an "upgrade" is borderline newspeak.
It's normal industry terminology.
You mean, like "cloud"? or "trusted computing"? ([1] comes to mind.)

[1] http://lafkon.net/tc/
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Rich
2018-09-08 17:05:33 UTC
Post by Ivan Shmakov
Post by Richard Kettlewell
Post by Ivan Shmakov
I don't seem to understand what argument you're trying to make.
The point is that your apparent proposal that OpenPGP could replace
web PKI is not realistic.
First, that should've been "complement"; apologies if I wasn't
being clear enough.
That said, with respect to integrity, my primary concern is
end-to-end verification between the author (as opposed to:
"server") and the reader. In this case, not only OpenPGP
"could replace" Web PKI -- it's already well-established.
Granted, it's mostly restricted to software distribution, but
the very same issues arise with, say, blogs; and so the same
solutions can be applied.
The big stumbling block to your pgp concept is that at present there is
no defined method to attach said signatures to web pages such that the
signatures can be communicated to a browser in a way that it could use
them to verify the integrity of the web page being delivered.

Now, I don't mean that such could not be developed. A new http header
or an html meta element could carry a pgp signature of the page across
the connection to the browser, for the browser to then use to verify the
signature. But that does not exist at the moment. And it's a bit of a
chicken-and-egg problem now. No site would add the header or the meta
tag if zero browsers are going to verify it, and the browser writers are
not likely to add support for automatic verification without enough
sites providing signatures.
Ivan Shmakov
2018-09-09 05:07:23 UTC
Post by Rich
Post by Ivan Shmakov
That said, with respect to integrity, my primary concern is
end-to-end verification between the author (as opposed to: "server")
and the reader. In this case, not only OpenPGP "could replace" Web
PKI -- it's already well-established.
Granted, it's mostly restricted to software distribution, but the
very same issues arise with, say, blogs; and so the same solutions
can be applied.
The big stumbling block to your pgp concept is that at present there
is no defined method to attach said signatures to web pages such that
the signatures can be communicated to a browser in a way that it
could use it to verify the integrity of the web page being delivered.
I was thinking more along the lines of:

$ tar -jcf blog.tar.bz2 -- blog
$ gpg --detach-sign -- blog.tar.bz2
...
$ rsync -cb -t --suffix=.~$(date +%s)~ \
-- blog.tar.bz2{,.sig} ***@website.example.com:public/
$

And, conversely, on the reader's side:

$ wget -x -- website.example.com/~jrh/blog.tar.bz2{,.sig}
...
$ gpg --verify -- blog.tar.bz2.sig
$ tar -jx -- blog < blog.tar.bz2
$ sensible-browser -- blog/index.xhtml

That, of course, can be automated for periodic updates.
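E.g., a single crontab entry would do; a sketch only (URL and paths
every bit as hypothetical as above, and untested):

```shell
# m h dom mon dow: fetch, verify, unpack once a day; the && chain
# stops (leaving the old copy intact) if the signature fails.
0 4 * * * cd ~/mirror && wget -q -N -- http://website.example.com/~jrh/blog.tar.bz2 http://website.example.com/~jrh/blog.tar.bz2.sig && gpg --verify -- blog.tar.bz2.sig && tar -jxf blog.tar.bz2 -- blog
```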

Surely, that won't work for an average Joe, but then again, as
was already mentioned in this thread, the average Joe does /not/
care about the blog being tampered with by a third party.
(Because those who /do/ care /do/ learn OpenPGP.)
Post by Rich
Now, I don't mean that such could not be developed. A new http
header or a html meta element could carry a pgp signature of the page
across the connection to the browser for the browser to then use to
verify the signature.
That'd be <link rel="openpgp-signature" href="foo.xhtml.sig" />
HTML-wise. Alternatively, for HTTP/1.1: Link: <foo.xhtml.sig>;
rel="openpgp-signature".

However, as the http://efail.de/ case shows, excessively
automated "security" can be harmful, so I'd rather require the
user to explicitly request signature verification (when and if
he or she deems it important) by default.
Post by Rich
But that does not exist at the moment.
There're only two things missing:

* IANA registration; per RFC 8288, that's straightforward
("The goal of the registry is to reflect common use of links
on the Internet. Therefore, the expert(s) should be strongly
biased towards approving registrations, unless they are
abusive, frivolous, not likely to be used on the Internet, or
actively harmful to the Internet and/or the Web (not merely
aesthetically displeasing or architecturally dubious).");

* actual adoption; that's a tad trickier.
Post by Rich
And it's a bit of a chicken-and-egg problem now. No site would add the
header or the meta tag if zero browsers are going to verify it, and the
browser writers are not likely to add support for automatic
verification without enough sites providing signatures.
Why, thank you, now I have two more items in my todo list!
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Ivan Shmakov
2018-09-09 06:12:49 UTC
[...]
Post by Ivan Shmakov
Post by Rich
Now, I don't mean that such could not be developed. A new http
header or an html meta element could carry a pgp signature of the
page across the connection to the browser for the browser to then
use to verify the signature.
That'd be <link rel="openpgp-signature" href="foo.xhtml.sig" />
HTML-wise. Alternatively, for HTTP/1.1: Link: <foo.xhtml.sig>;
rel="openpgp-signature".
Scratch that; the IANA registry [1] already has describedby,
which can be used for the purpose per RFC 6249; to quote:

This example shows a brief Metalink server response with OpenPGP
signature only:

Link: <http://example.com/example.ext.asc>; rel=describedby;
type="application/pgp-signature"

Metalink clients SHOULD support the use of OpenPGP signatures.

The HTML <link /> element would work just the same.

[1] http://iana.org/assignments/link-relations/link-relations.xhtml

[...]
Post by Ivan Shmakov
Post by Rich
And it's a bit of a chicken-and-egg problem now. No site would add the
header or the meta tag if zero browsers are going to verify it, and
the browser writers are not likely to add support for automatic
verification without enough sites providing signatures.
Why, thank you, now I have two more items in my todo list!
(Minus half an item for me.)
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Rich
2018-09-08 16:59:42 UTC
Post by Ivan Shmakov
To clarify: I would be fine with the browser defaulting to
"HTTPS only," allowing me to opt-out -- either for specific URIs,
or altogether. That the server administrators make a decision
for me is what I consider a problem.
In the current environment, the browsers are still fully functional
with both the http and https protocols.

The "you must use https" enforcement occurs at the webserver side,
where the server is configured to feed back to browsers using http an
HTTP redirect code, causing the browser to then retrieve the https
version of the website.
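That server-side "enforcement" is typically just a couple of lines of
config; e.g. a minimal nginx sketch (hostname hypothetical, not from
any site in this thread):

```nginx
# Catch plain-http requests and bounce them to https.
server {
    listen 80;
    server_name www.example.org;
    return 301 https://$host$request_uri;
}
```

Any browser that asks for the http:// version gets the 301 and
re-requests over https.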

So far, the browser makers have not changed the browsers to only use
https for web data. The browsers (at the moment) are more than willing
to use plain http. The web server is where the enforcement occurs for
now (however Google's on such a bender to convince everyone to switch
to https that I'd not be surprised to see another Chrome update at some
point in the future with a changelog entry of: "drop support for plain
http protocol").
Mike Spencer
2018-09-09 06:32:53 UTC
I don't see why everyone else should suffer reduced security for their
benefit.
If the user's browser asks for HTTP, the server should serve that. If
the user's browser asks for HTTPS using a particular established
crypto protocol, the server should cooperate.

That's all easily negotiated transparently. The code already exists.
Do what the client user asks for. No other user suffers.
--
Mike Spencer Nova Scotia, Canada
Dan Purgert
2018-09-09 11:30:22 UTC
Post by Mike Spencer
I don't see why everyone else should suffer reduced security for their
benefit.
If the user's browser asks for HTTP, the server should serve that. If
the user's browser asks for HTTPS using a particular established
crypto protocol, the server should cooperate.
Excepting instances where it's generally considered that everything
should be encrypted (e.g. banking), in which case the server redirects
http to https.

Or instances where https is not an option (e.g. my website, which is
just static HTML).
--
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5 4AEE 8E11 DDF3 1279 A281
Mike Spencer
2018-09-09 19:22:13 UTC
Post by Dan Purgert
Post by Mike Spencer
I don't see why everyone else should suffer reduced security for their
benefit.
If the user's browser asks for HTTP, the server should serve that. If
the user's browser asks for HTTPS using a particular established
crypto protocol, the server should cooperate.
Excepting instances where it's generally considered that everything
should be encrypted (e.g. banking), in which case the server redirects
http to https.
Yes, of course. Or redirects to an explanatory page with a simple
presentation of the need for secure comms. I was keeping it simple,
re. the more general case.
Post by Dan Purgert
Or instances where https is not an option (e.g. my website, which is
just static HTML).
Same as above, mutatis mutandis, redirecting to an explanation that
crypto isn't available but the content is available by HTTP. User can
accept that and re-request accordingly, or reject it for h{is,er} own
reasons.
--
Mike Spencer Nova Scotia, Canada
Rich
2018-09-08 16:53:37 UTC
Post by Ivan Shmakov
Though I'm at a loss as to how browser developers can decide
whether any given site is HTTPS-only or not.
They don't. The decision to go HTTPS-only is up to each site.

Where the browser developers come in is they *heavily encourage* those
non-https sites to switch over by adding big, ugly, red, flashing, neon
icons next to the url saying "INSECURE:". This, of course, is clearly
meant to scare the hordes of users who have almost zero technical
knowledge, in the hopes that some number of those low-tech users then
pressure said site into "becoming secure" (to get rid of the big scary
warning).

So the browser developers are deciding to perform fearmongering, in
hopes of pushing those sites' users into pressuring the sites to change
to https only.

But they (browser developers) are not unilaterally deciding that the
sites should be https only, and forcing such inside the browsers (at
least not yet; they certainly are in a position to do so, but so far
the fearmongering has been sufficient to get lots of sites to switch).
Mike Spencer
2018-09-09 06:47:10 UTC
Post by Rich
Where the browser developers come in is they *heavily encourage* those
non-https sites to switch over by adding big, ugly, red, flashing, neon
icons next to the url saying "INSECURE:". This, of course, is clearly
meant to scare the hordes of users who have almost zero technical
knowledge, in the hopes that some number of those low-tech users then
pressure said site into "becoming secure" (to get rid of the big scary
warning).
Qvpxurnqf. Jeez, I haven't seen that yet. One more thing to defeat
when I get dragged snarling and spitting into moderny web browsers.

I'm very far from being an IT pro and I'm old enough to remember when
it was meaningful to speak of "the computer" at Harvard. But I don't
need that kind of crap. Firefox *used* to have a menu item to turn
js and images on/off (such a version is running right here now) but
they dropped it. More crap.
Post by Rich
So the browser developers are deciding to perform fearmongering, in
hopes of pushing those sites' users into pressuring the sites to change
to https only.
But they (browser developers) are not unilaterally deciding that the
sites should be https only, and forcing such inside the browsers (at
least not yet; they certainly are in a position to do so, but so far
the fearmongering has been sufficient to get lots of sites to switch).
Qvpxurnqf. Oh, right, I already said that.
--
Mike Spencer Nova Scotia, Canada
Rich
2018-09-09 14:37:02 UTC
Post by Mike Spencer
Post by Rich
Where the browser developers come in is they *heavily encourage* those
non-https sites to switch over by adding big, ugly, red, flashing, neon
icons next to the url saying "INSECURE:". This, of course, is clearly
meant to scare the hordes of users who have almost zero technical
knowledge, in the hopes that some number of those low-tech users then
pressure said site into "becoming secure" (to get rid of the big scary
warning).
Qvpxurnqf. Jeez, I haven't seen that yet. One more thing to defeat
when I get dragged snarling and spitting into moderny web browsers.
It is a 'feature' of more modern browsers, primarily driven by the
google chrome fools more than anyone else. But Firefox, as they have
been playing "ride chrome's coat tails" for a while, is likely going
to copy something similar eventually. [1]
Post by Mike Spencer
I'm very far from being an IT pro and I'm old enough to remember when
it was meaningful to speak of "the computer" at Harvard. But I don't
need that kind of crap. Firefox *used* to have a menu item to turn
js and images on/off (such a version is running right here now) but
they dropped it. More crap.
Turning js on/off in Firefox moved into the NoScript extension. It
provides the identical global "on/off" of the old switch, plus the
ability to turn individual bits on/off, one by one, until one finds
the minimal amount of js necessary for a given site to function.

Images on/off, hmm, I'm less sure whether that one is still around in
extension form anymore. A bit of searching turned up this info page:
https://www.ghacks.net/2015/03/24/how-to-turn-off-images-in-firefox/.
The old global block is still present; it has moved into the
about:config details and out of the easy checkbox of the preferences.
But there also seems to be a built-in ability to block selectively. So
again, more powerful than the old simple "on/off" switch.


[1] At least at this moment, Firefox 62, with an http-only site, shows
only a light grey circle with a lower-case "i" inside, next to the url.
One has to click on the "circle i" to see details. Only inside the
"details" drop-down does one see, in red text, "Connection is Not
Secure". Those of us with enough technical knowledge to understand the
differences know it simply means "was delivered over plain http". Joe
average, who considers facebook to be "the internet", sees that and
panics.

What the chrome team has done is move that "Connection is Not Secure"
message out of a secondary info window that one has to click upon to
find it and instead put it front and center right next to the url in
the url bar so that even Joe average can't miss it.
Mike Spencer
2018-09-09 19:42:12 UTC
It ["Panic! Insecure!" popups] is a 'feature' of more modern
browsers, primarially driven by the google chrome fools more than
anyone else. But Firefox, as they have been playing "ride chrome's
coat tails" for a while, are likely going to copy something similar
into Firefox eventually. [1]
Did I hear a rumor that FF is also going to dick with DNS, somehow
running DNS through (only?) their own servers? Or something? Is that
a real thing? Will that break use of the hosts file to fuvgpna
connections to unwanted hosts?
Images on/off, hmm, I'm less sure if that one is still around in
extension form anymore. A bit of searching turned up this info page:
https://www.ghacks.net/2015/03/24/how-to-turn-off-images-in-firefox/.
The old global block is still present; it has moved into the
about:config details and out of the easy checkbox of the
preferences.
Yes. The old method in Netscape was that, with "images off", a
placeholder icon was displayed; clicking on the icon would fetch the
image and variously render it inline or alone. Erratically implemented
in older FFoxen. That worked very well.

Of course, sometimes what you need is most especially the image(s).
But news & opinion sites seem to think it's necessary to have a pic,
often as not some random stock photo, with every item. That's *before*
the matter of ads. Makes a substantial difference on a slow
connection.

I'm trying to ramp up a "never 10" Windows user who wants to move to
Linux. More than enough novelty and detail for her without jigging in
and out of about:config.
But there also seems to be a built-in ability to block selectively. So
again, more powerful than the old simple "on/off" switch.
[1] At least at this moment, Firefox 62, with an http-only site, shows
only a light grey circle with a lower-case "i" inside, next to the url.
One has to click on the "circle i" to see details. Only inside the
"details" drop-down does one see, in red text, "Connection is Not
Secure". Those of us with enough technical knowledge to understand the
differences know it simply means "was delivered over plain http". Joe
average, who considers facebook to be "the internet", sees that and
panics.
What the chrome team has done is move that "Connection is Not Secure"
message out of a secondary info window that one has to click upon to
find it and instead put it front and center right next to the url in
the url bar so that even Joe average can't miss it.
--
Mike Spencer Nova Scotia, Canada
Eli the Bearded
2018-09-09 20:22:50 UTC
Post by Mike Spencer
Did I hear a rumor that FF is also going to dick with DNS, somehow
running DNS through (only?) their own servers? Or something? Is that
a real thing? Will that break use of the hosts file to fuvgpna
connections to unwanted hosts?
Firefox has an *optional, off by default* DNS over HTTPS for use by
people who have reason to doubt the integrity of their DNS system. If
you enable it, yes, the "hosts" file might be ignored. I have never
liked that method of blocking stuff, because I usually run a webserver
at localhost. So I turn to add-ons that let me block sites. The add-ons
are much better at dealing with wildcard blocks anyway (think
"*.doubleclick.*").
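(And if you want wildcard-ish blocking outside the browser, a small
resolver like dnsmasq can do it; a sketch, assuming dnsmasq is your
local resolver, with the domain picked purely as an example:)

```
# /etc/dnsmasq.conf fragment: answer 0.0.0.0 for doubleclick.net
# and every subdomain of it.
address=/doubleclick.net/0.0.0.0
```

(It still can't match "*.doubleclick.*" across TLDs, though; you'd
need one such line per TLD.)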
Post by Mike Spencer
Of course, sometimes what you need is most especially the image(s).
But news & opinion sites seem to think it's necessary to have a pic,
often as not some random stock photo with every item. That's *before*
the matter of ads. Makes a substantial difference on a slow
connection.
lynx / elinks / w3m are your friends. Use them. All can be configured to
download and display images upon user request.

Elijah
------
lynx, though, does a terrible job with tabular data
Paul Sture
2018-09-09 20:30:30 UTC
Post by Eli the Bearded
lynx / elinks / w3m are your friends. Use them. All can be configured to
download and display images upon user request.
lynx, though, does a terrible job with tabular data
I found that "links" does a better job than "lynx" with tabular data,
but I don't know whether it is still maintained.
--
We will not be enslaved through coercion, but by the lure of convenience.
Eli the Bearded
2018-09-10 18:27:10 UTC
Post by Paul Sture
lynx / elinks / w3m are your friends. Use them. [...]
lynx, though, does a terrible job with tabular data
I found that "links" does a better job than "lynx" with tabular data,
but I don't know whether it is still maintained.
I did mention 'elinks', which is a successor project.

http://elinks.cz/

ELinks is an Open Source project covered by the GNU General Public
License. It originates from the Links project written by Mikulas
Patocka.

Elijah
------
generally uses elinks for better formatting and lynx for better keybinding
Ivan Shmakov
2018-09-10 02:55:23 UTC
Post by Eli the Bearded
Post by Mike Spencer
Did I hear a rumor that FF is also going to dick with DNS, somehow
running DNS through (only?) their own servers? Or something? Is
that a real thing? Will that break use of hosts file to fuvgpna
connections to unwanted hosts?
Firefox has an *optional, off by default* DNS over HTTPS for use by
people who have reason to doubt the integrity of their DNS system.
To quote https://wiki.mozilla.org/Trusted_Recursive_Resolver:

TRR> Set `network.trr.mode` to 2 to make DNS Over HTTPS the browser's
TRR> first choice but use regular DNS as a fallback (0 is "off by
TRR> default", 1 lets Firefox pick whichever is faster, 3 for TRR only
TRR> mode, 5 to explicitly turn it off).

TRR> TRR is preffed OFF by default and you need to set a URI for an
TRR> available DOH "confirmation" domain name. This confirmation
TRR> domain is a pref by default set to "example.com". TRR will also
TRR> by default await the captive-portal detection to raise its green
TRR> flag before getting activated.

AFAICT, the above means that, by default, TRR is set to the "off by
default" state, /and/ will be activated when the "captive-portal
detection" (whatever that means) is triggered.

This is most certainly /not/ the true "opt-in," which would be
to use 5 ("turn TRR off") by default, and allow the user to change
the setting to 0 .. 3, whichever he or she finds suitable.
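(Until then, the closest one gets to that true opt-out is to set it
by hand, e.g. in user.js; the value 5 is per the wiki text quoted
above:)

```javascript
// Explicitly turn TRR off; 5 = "explicitly turn it off" per the
// Trusted_Recursive_Resolver wiki page quoted above.
user_pref("network.trr.mode", 5);
```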

[...]
Post by Eli the Bearded
lynx, though, does a terrible job with tabular data
That kind of depends on the You may, for example, think that
data and what you want to do this example is a fine way to
with it once rendered. represent a two-cell table with a
sentence-worth of text in each cell,
but try to run it through a speech
synthesizer, and you may find Lynx
actually doing a somewhat better job.

(Copying text from such a table isn't necessarily straightforward,
either, especially with an editor lacking "rectangular" copy mode.)

FWIW, I tend to think that tables should be more or less reserved
for "simple" numeric data and such. Wrapping entire novels in
tables, HTML or otherwise, is something to avoid at all costs.

(Although I had experience with typesetting tables for the city's
"parks and recreation" committee c. 2002. Novel-sized, yes.)
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Mike Spencer
2018-09-10 07:16:20 UTC
Post by Eli the Bearded
Post by Mike Spencer
Did I hear a rumor that FF is also going to dick with DNS, somehow
running DNS through (only?) their own servers? Or something? Is that
a real thing? Will that break use of hosts file to fuvgpna
connections to unwanted hosts?
Firefox has an *optional, off by default* DNS over HTTPS for use by
people who have reason to doubt the integrity of their DNS system. If
you enable it, yes, the "hosts" file might be ignored.
Ah, so. Thanks.
Post by Eli the Bearded
I have never liked that method of blocking stuff, because I usually
run a webserver at localhost. So I turn to add-ons that let me block
sites. The add-ons are much better at dealing with wildcard blocks
anyway (think "*.doubleclick.*").
Ooh. Yes, that's a shortcoming of /etc/hosts, at least AFAIUI.
Post by Eli the Bearded
Post by Mike Spencer
Of course, sometimes what you need is most especially the image(s).
But news & opinion sites seem to think it's necessary to have a pic,
often as not some random stock photo with every item. That's *before*
the matter of ads. Makes a substantial difference on a slow
connection.
lynx / elinks / w3m are your friends. Use them. All can be configured to
download and display images upon user request.
Haven't looked at lynx for over a decade. Hmmmmm.... Huh. Cranky.
Have to do something about the hor'ble colors. Well, maybe, for some
purposes.
--
Mike Spencer Nova Scotia, Canada
Ivan Shmakov
2018-09-10 07:40:42 UTC
[Cross-posting to news:comp.infosystems.www.misc. This time
for real.]

[...]
Post by Mike Spencer
Of course, sometimes what you need is most especially the image(s).
But news & opinion sites seem to think it's necessary to have a
pic, often as not some random stock photo with every item. That's
*before* the matter of ads. Makes a substantial difference on a
slow connection.
lynx / elinks / w3m are your friends. Use them. All can be
configured to download and display images upon user request.
Haven't looked at lynx for over a decade. Hmmmmm.... Huh. Cranky.
Have to do something about the hor'ble colors.
This is what I did:

$ grep -E -- ^show_color < .lynxrc
show_color=never
$
Well, maybe, for some purposes.
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Computer Nerd Kev
2018-09-08 23:01:48 UTC
Post by Computer Nerd Kev
The protocols _should_ be up to date (who knows, maybe the person
using TLS 1.3 to view their TV guide actually has a good reason
for it), but when the information transmitted is not sensitive
people should not be forced to use this encryption.
Firstly, it's not just about confidentiality; integrity matters too.
As a concrete example, I don't want end users' ISPs inserting adverts
into pages from my website (or anyone else's, but I have less control
over that).
Just like you might not want them running ad blockers, or Firefox
(least of all my preferred web browser, Dillo). You can design your
site not to work if the users disagree, but that's considered to be
poor web design as you should be allowing your users to make their
own choices about their browser software. I understand why this is
different for HTTPS on sites accepting sensitive information, but
I think for sites that do not, then it is the same.
Secondly, providing an unsecured option is just asking for downgrade
attacks.
If a website showing a TV guide suddenly one day decides to ask the
user for their name, date of birth, and [whatever primary ID sources
are most popular in your country], only a complete idiot would go
ahead and enter it. Such an idiot will be falling for every other
scam in the book as well, so they're a lost cause. Why then do I
now need HTTPS to load my TV guide (most recent conversion to
affect me, and also somehow won't load now in Dillo even with an
up to date OpenSSL library)?
I don't think a niche interest in retrocomputing is sufficient to
justify compromising on these points.
It's not just "retrocomputing", it's users with "smart" devices that
often lose software support much sooner than PCs; and not everybody
wants to, or can afford to, upgrade such hardware all the time in
order for it to still connect over HTTPS.
--
__ __
#_ < |\| |< _#
Huge
2018-09-09 08:57:33 UTC
On 2018-09-08, Computer Nerd Kev <***@telling.you.invalid> wrote:

[23 lines snipped]
Post by Computer Nerd Kev
If a website showing a TV guide suddenly one day decides to ask the
user for their name, date of birth, and [whatever primary ID sources
are most popular in your country], only a complete idiot would go
ahead and enter it. Such an idiot will be falling for every other
scam in the book as well, so they're a lost cause. Why then do I
now need HTTPS to load my TV guide (most recent conversion to
affect me, and also somehow won't load now in Dillo even with an
up to date OpenSSL library)?
Arrogance is a common trait among computer "nerds".
--
Today is Boomtime, the 33rd day of Bureaucracy in the YOLD 3184
~ Stercus accidit ~
Rich
2018-09-09 14:46:17 UTC
Post by Computer Nerd Kev
If a website showing a TV guide suddenly one day decides to ask the
user for their name, date of birth, and [whatever primary ID sources
are most popular in your country], only a complete idiot would go
ahead and enter it.
You'd be *really* surprised if you were to do some A/B testing with
"average joe" computer users and the scenario you outline above.

A significant percentage of "average joe" computer users lack the
critical thinking ability to recognize the fact you outline [1], and
instead far too many of them would simply enter the information. You'd
up your "enter" percentage substantially if you made it one of those
css "cover the page, but the page shows through in light grey" boxes
and worded it with something to the effect of "to access your TV
listings, we first need a bit of information about you".

There is a reason why Nigerian oil prince millionaire scams work so
well. The vast majority of "average joes" could not sniff out a
scam if the scammer came out and said directly to them "this is a scam,
you are being scammed right now".

And the fact is, they are not actually "complete idiots". They just
lack the critical thinking skills necessary to recognize the change as
something nefarious. That, and some of them operate from a "too
trusting" perspective, where they think most others are trustworthy
rather than looking for an angle to take advantage of them.





[1] This is a TV guide website; the last 2,000+ times I used it I was
not asked for my name/birthdate/primaryID values; why is it asking this
time, and why does it need this information?
Computer Nerd Kev
2018-09-09 23:45:01 UTC
Post by Rich
Post by Computer Nerd Kev
If a website showing a TV guide suddenly one day decides to ask the
user for their name, date of birth, and [whatever primary ID sources
are most popular in your country], only a complete idiot would go
ahead and enter it.
You'd be *really* surprised if you were to do some A/B testing with
"average joe" computer users and the scenario you outline above.
A significant percentage of the "average joe" computer users lack the
critical thinking ability to recognize the fact you outline [1] and
instead far too many of them would simply enter the information. You'd
up your "enter" percentage substantially if you make it one of those
css "cover the page, but the page shows through in light grey" boxes
and worded it with something to the effect of "to access your TV
listings, we first need a bit of information about you".
Given that many of these sites run ads via Google that are similarly
misleading and likely to be fallen for by anyone susceptible to the
above, I don't see why they should be inconveniencing users to stop
one unlikely method of "attack", while actively encouraging another
one that's much more common.
--
__ __
#_ < |\| |< _#
Rich
2018-09-10 01:41:28 UTC
Post by Computer Nerd Kev
Post by Rich
Post by Computer Nerd Kev
If a website showing a TV guide suddenly one day decides to ask the
user for their name, date of birth, and [whatever primary ID sources
are most popular in your country], only a complete idiot would go
ahead and enter it.
You'd be *really* surprised if you were to do some A/B testing with
"average joe" computer users and the scenario you outline above.
A significant percentage of the "average joe" computer users lack the
critical thinking ability to recognize the fact you outline [1] and
instead far too many of them would simply enter the information. You'd
up your "enter" percentage substantially if you make it one of those
css "cover the page, but the page shows through in light grey" boxes
and worded it with something to the effect of "to access your TV
listings, we first need a bit of information about you".
Given that many of these sites run ads via Google that are similarly
misleading and likely to be fallen for by anyone susceptible to the
above, I don't see why they should be inconveniencing users to stop
one unlikely method of "attack", while actively encouraging another
one that's much more common.
Well, there is one reason I can think of for "why".

Keeping said ad revenue for themselves. Or making sure they are the
"only game in town to buy ad space from".

If joes-blog.com serves its data over http:, and uses googles ad
system, then Comcast can inject ads along side googles, and make money
separate from google, from the injecting.

But if joes-blog.com serves its data over https: only, and uses googles
ad system, then the only game in town to go buy ad space from becomes
google. So google gets to cut Comcast out and hoard the ad revenue
for themselves.
Eli the Bearded
2018-09-10 18:32:58 UTC
Post by Rich
Well, there is one reason I can think of for "why".
Keeping said ad revenue for themselves. Or making sure they are the
"only game in town to buy ad space from".
If joes-blog.com serves its data over http:, and uses googles ad
system, then Comcast can inject ads along side googles, and make money
separate from google, from the injecting.
But if joes-blog.com serves its data over https: only, and uses googles
ad system, then the only game in town to go buy ad space from becomes
google. So google gets to cut Comcast out and hoard the ad revenue
for themselves.
DING-DING-DING-DING-DING-DING-DING-DING-DING-DING

WE HAVE A WINNER! And for the truly unscrupulous middleman[*] that wants
to insert ads in pages, DNS shenanigans are the next frontier.

Elijah
------
[*] The ones that make Comcast seem ethical and friendly
Computer Nerd Kev
2018-09-11 06:44:35 UTC
Post by Rich
Post by Computer Nerd Kev
Post by Rich
A significant percentage of the "average joe" computer users lack the
critical thinking ability to recognize the fact you outline [1] and
instead far too many of them would simply enter the information. You'd
up your "enter" percentage substantially if you make it one of those
css "cover the page, but the page shows through in light grey" boxes
and worded it with something to the effect of "to access your TV
listings, we first need a bit of information about you".
Given that many of these sites run ads via Google that are similarly
misleading and likely to be fallen for by anyone susceptible to the
above, I don't see why they should be inconveniencing users to stop
one unlikely method of "attack", while actively encouraging another
one that's much more common.
Well, there is one reason I can think of for "why".
Keeping said ad revenue for themselves. Or making sure they are the
"only game in town to buy ad space from".
If joes-blog.com serves its data over http:, and uses googles ad
system, then Comcast can inject ads along side googles, and make money
separate from google, from the injecting.
But if joes-blog.com serves its data over https: only, and uses googles
ad system, then the only game in town to go buy ad space from becomes
google. So google gets to cut Comcast out and hoard the ad revenue
for themselves.
That does make sense, given that Google is the main driver in the
move to HTTPS. I've never seen it in Australia, but given that
Wikipedia calls Comcast the largest ISP in the USA I can imagine
how Google would be scared of them setting any trends.
--
__ __
#_ < |\| |< _#
Paul Sture
2018-09-09 20:09:55 UTC
Post by Richard Kettlewell
Post by Computer Nerd Kev
The protocols _should_ be up to date (who knows, maybe the person
using TLS 1.3 to view their TV guide actually has a good reason
for it), but when the information transmitted is not sensitive
people should not be forced to use this encryption.
Firstly, it’s not just about confidentiality; integrity matters too.
As a concrete example, I don’t want end users’ ISPs inserting adverts
into pages from my website (or anyone else’s, but I have less control
over that).
I didn't realise ISPs did this until recently, when I learned that
Comcast inject their own ads quite aggressively.
Post by Richard Kettlewell
Secondly, providing an unsecured option is just asking for downgrade
attacks.
I don’t think a niche interest in retrocomputing is sufficient to
justify compromising on these points.
I didn't realise until I saw the following Troy Hunt YouTube video
that plain http could represent so many dangers for the end user.

My summary of the video's main points:

Troy Hunt's Video
http://youtu.be/_BNIkw4Ao9w

- ISPs such as Comcast aggressively injecting ads
- Hotels injecting ads
- Governments, e.g. Egypt, injecting ads and cryptomining scripts for
monetary gain
- The Wifi Pineapple device. I've played with one of these myself and it's
surprising what you can see.
- Static web site being hijacked
- inserting image and sound content
- cryptomining
- attempted router hijack
- DNS Hijacking
- DDoS on GitHub via injected scripts (China's Great Cannon), a risk to
the broader internet
- Sending phishing pages to the browser to steal credentials
- Software (e.g. Flash) update prompts pointing to malware downloads
- Fake copies of login pages, e.g. Google Mail. Who except the
tech-savvy actually looks at the URL when presented with a
familiar login page?
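All of the injection items above come down to the same weakness:
anything on the path between server and browser can rewrite a plaintext
HTTP response. A minimal sketch of the idea (the payload URL is
hypothetical, purely for illustration):

```python
def inject_ad(html: bytes,
              payload: bytes = b'<script src="http://ads.example/a.js"></script>') -> bytes:
    """Splice a payload into a plaintext HTML response, the way an
    on-path ISP, hotel, or state middlebox can. Over HTTPS the
    middlebox cannot produce a valid ciphertext for the modified page,
    so the tampering fails instead of rendering."""
    marker = b"</body>"
    if marker in html:
        # Insert just before the closing body tag so the page still renders.
        return html.replace(marker, payload + marker, 1)
    return html + payload  # no </body>? just append
```

The same splice point works for ads, cryptominers, or a fake login
form, which is why the list above is so varied.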

Summary:

Using https is not to protect you, the owner of your website, but to
protect users visiting your website.


There is a caveat to the video. Hunt plugs his httpsiseasy.com site,
but when you look at that, you are handing control over to someone else
(Cloudflare) for convenience. It may be a free service (for the
moment), but you are handing all your traffic to Cloudflare. Those of
us outside the US might not wish to do that, and there could be GDPR
implications as well.
--
We will not be enslaved through coercion, but by the lure of convenience.
Rich
2018-09-09 22:00:56 UTC
Post by Paul Sture
Post by Computer Nerd Kev
The protocols _should_ be up to date (who knows, maybe the person
using TLS 1.3 to view their TV guide actually has a good reason
for it), but when the information transmitted is not sensitive
people should not be forced to use this encryption.
Firstly, it's not just about confidentiality; integrity matters too.
As a concrete example, I don't want end users' ISPs inserting adverts
into pages from my website (or anyone else's, but I have less control
over that).
I didn't realise ISPs did this until recently, when I learned that
Comcast inject their own ads quite aggressively.
I don't think a niche interest in retrocomputing is sufficient to
justify compromising on these points.
I didn't realise until I saw the following Troy Hunt YouTube video
that plain http could represent so many dangers for the end user.
Troy Hunt's Video
http://youtu.be/_BNIkw4Ao9w
- ISPs such as Comcast aggressively injecting ads
- Hotels injecting ads
- Governments, e.g. Egypt, injecting ads and cryptomining scripts for
monetary gain
...
And there is also the Upside-Down-Ternet (which was someone having fun,
but it shows more of what is possible when there's no protection on the
network channel):

http://www.ex-parrot.com/pete/upside-down-ternet.html
Computer Nerd Kev
2018-09-10 00:02:38 UTC
Post by Paul Sture
- ISPs such as Comcast aggressively injecting ads
- Hotels injecting ads
- Governments, e.g. Egypt, injecting ads and cryptomining scripts for
monetary gain
- The Wifi Pineapple device. I've played with one of these myself and it's
surprising what you can see.
- Static web site being hijacked
- inserting image and sound content
- cryptomining
- attempted router hijack
- DNS Hijacking
- DDoS on GitHub via injected scripts (China's Great Cannon), a risk to
the broader internet
- Sending phishing pages to the browser to steal credentials
- Software (e.g. Flash) update prompts pointing to malware downloads
- Fake copies of login pages, e.g. Google Mail. Who except the
tech-savvy actually looks at the URL when presented with a
familiar login page?
Using https is not to protect you, the owner of your website, but to
protect users visiting your website.
I don't like how Chrome spies on users. Does that mean that I should
set my websites to deny access to Chrome users? No, it means that I
should make sure that my site works well with other browsers so that
users who are concerned have the choice of switching to software
that I think is better.

In this case, I set my websites to accept HTTPS connections. If
users are afraid of the above issues, they can set their browser
to only connect on HTTPS. If most users want to do this (or it
is seen that they should), browser authors can set connecting
on HTTPS as default.

However just like some people may want/need to still use Chrome,
if some people want/need to connect on unencrypted HTTP, they
can still do so.
--
__ __
#_ < |\| |< _#
Richard Kettlewell
2018-09-10 08:46:58 UTC
Paul Sture <***@sture.ch> writes:
[...]
Post by Paul Sture
Using https is not to protect you, the owner of your website, but to
protect users visiting your website.
Quite.
Post by Paul Sture
There is a caveat to the video. Hunt plugs his httpsiseasy.com site,
but when you look at that, you are handing control over to someone else,
(Cloudflare) for convenience. It may be a free service (for the
moment), but you are handing all your traffic to Cloudflare. Those of
us outside the US might not wish to do that, and there could be GDPR
implications as well.
https is pretty easy without a CDN being involved too l-)
--
https://www.greenend.org.uk/rjk/
Marko Rauhamaa
2018-09-10 10:08:29 UTC
Post by Paul Sture
Using https is not to protect you, the owner of your website, but to
protect users visiting your website.
I have strong doubts about the level of protection. Any self-respecting
government or criminal outfit can present a valid certificate for any
domain.

The whole concept of the "chain of trust" is flawed. Trust isn't
transitive; and I wouldn't trust even the root certificate authorities.

Essentially, it's a protection racket.


Marko
Dan Purgert
2018-09-10 11:20:01 UTC
Post by Marko Rauhamaa
Post by Paul Sture
Using https is not to protect you, the owner of your website, but to
protect users visiting your website.
I have strong doubts about the level of protection. Any self-respecting
government or criminal outfit can present a valid certificate for any
domain.
The whole concept of the "chain of trust" is flawed. Trust isn't
transitive; and I wouldn't trust even the root certificate authorities.
Essentially, it's a protection racket.
Definitely, but it does have the benefit of being "trustworthy enough"
for the average person; and I think it ultimately comes down to "will
the average person be able to use it".
--
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5 4AEE 8E11 DDF3 1279 A281
Huge
2018-09-10 16:21:37 UTC
Post by Dan Purgert
Post by Marko Rauhamaa
Post by Paul Sture
Using https is not to protect you, the owner of your website, but to
protect users visiting your website.
I have strong doubts about the level of protection. Any self-respecting
government or criminal outfit can present a valid certificate for any
domain.
The whole concept of the "chain of trust" is flawed. Trust isn't
transitive; and I wouldn't trust even the root certificate authorities.
Essentially, it's a protection racket.
Definitely, but it does have the benefit of being "trustworthy enough"
for the average person; and I think it ultimately comes down to "will
the average person be able to use it".
It *always* comes down to "will the average person be able to use it". And
unless it's transparent and compulsory, they won't.
--
Today is Pungenday, the 34th day of Bureaucracy in the YOLD 3184
~ Stercus accidit ~
Paul Sture
2018-09-10 13:49:35 UTC
Post by Marko Rauhamaa
Post by Paul Sture
Using https is not to protect you, the owner of your website, but to
protect users visiting your website.
I have strong doubts about the level of protection. Any self-respecting
government or criminal outfit can present a valid certificate for any
domain.
True.
Post by Marko Rauhamaa
The whole concept of the "chain of trust" is flawed. Trust isn't
transitive; and I wouldn't trust even the root certificate authorities.
Essentially, it's a protection racket.
It is indeed. When Google announced that they would demote plain http
sites in their rankings I looked up the price of certificates with an
ISP I used until 2013, and the cheapest came in at something like USD 70
per annum. If you are running several sites that can easily add up to
more than you are paying an ISP for hosting those sites.

It's easier for the average blogger to opt for something like
blogspot.com which is, er, run by Google. Oh what a coincidence...
--
We will not be enslaved through coercion, but by the lure of convenience.
Rich
2018-09-10 16:21:14 UTC
Post by Paul Sture
Post by Marko Rauhamaa
Post by Paul Sture
Using https is not to protect you, the owner of your website, but to
protect users visiting your website.
I have strong doubts about the level of protection. Any self-respecting
government or criminal outfit can present a valid certificate for any
domain.
True.
Post by Marko Rauhamaa
The whole concept of the "chain of trust" is flawed. Trust isn't
transitive; and I wouldn't trust even the root certificate authorities.
Essentially, it's a protection racket.
It is indeed. When Google announced that they would demote plain http
sites in their rankings I looked up the price of certificates with an
ISP I used until 2013, and the cheapest came in at something like USD 70
per annum. If you are running several sites that can easily add up to
more than you are paying an ISP for hosting those sites.
It's easier for the average blogger to opt for something like
blogspot.com which is, er, run by Google. Oh what a coincidence...
https://letsencrypt.org/ $0.00 cost

The one downside is the certs last only 3 months, but that is
purposeful: there are numerous tools that automate the renewal process
for you, and the short lifetime is meant to encourage folks to automate
the renewal and replacement process for the certs.

Trusted by all the major browsers now for some time (at least if anyone
is running a reasonably recent version). Someone running NS 4.71 will
of course not have their root in their cert store.
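For the automated renewal those tools provide, the usual arrangement
with the certbot client (one common ACME tool; the file path and
schedule here are illustrative, not prescriptive) is a single cron
entry, since `certbot renew` only replaces certs that are close to
expiry:

```
# /etc/cron.d/certbot -- attempt renewal twice a day; certbot itself
# skips any certificate that is not yet near its expiry date
0 */12 * * * root certbot renew --quiet
```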
Mike Spencer
2018-09-12 06:53:35 UTC
Post by Rich
Trusted by all the major browsers now for some time (at least if anyone
is running a reasonably recent version). Someone running NS 4.71 will
of course not have their root in their cert store.
Another bludgeon. You have to upgrade your browser in order to keep
up to date your collection of certs. Or is there a way to manually
edit that binary certs.db file (in FF) to add or delete items?

The Unix way: Here's a datum I need to add to a config file; here's
one in the file I want to delete. Text editor -> done.

FWIW, NS 4.71 (or NN 4.76) doesn't do *any* presently on-line
crypto/HTTPS.

I do exactly one thing on-line involving money and I try very hard to
ensure that total failure of security will not possibly cost me more
than I'm prepared to lose.
--
Mike Spencer Nova Scotia, Canada
Andy Burns
2018-09-12 07:12:58 UTC
Post by Mike Spencer
Post by Rich
Someone running NS 4.71 will
of course not have their root in their cert store.
Another bludgeon. You have to upgrade your browser in order to keep
up to date your collection of certs. Or is there a way to manually
edit that binary certs.db file (in FF) to add or delete items?
I won't claim to remember anything about editing certificate stores in
Netscape, but according to Oracle it doesn't sound very different from
current Firefox

<https://docs.oracle.com/cd/E19957-01/817-3331/6miuccqob/index.html>

You can download Let's Encrypt's root and intermediate certs from here

<https://letsencrypt.org/certificates>
Dan Purgert
2018-09-12 10:27:39 UTC
Post by Mike Spencer
Post by Rich
Trusted by all the major browsers now for some time (at least if anyone
is running a reasonably recent version). Someone running NS 4.71 will
of course not have their root in their cert store.
Another bludgeon. You have to upgrade your browser in order to keep
up to date your collection of certs. Or is there a way to manually
edit that binary certs.db file (in FF) to add or delete items?
I think certs.db is (or, well, was at one time) the file where
user-installed certs (e.g. a self-signed CA) got stuck, assuming the
"user" wasn't also able to use sudo (or su root) to install it
system-wide.

So "manually editing" it is something like preferences -> advanced ->
cert store -> 'import cert' button from within Firefox.
Post by Mike Spencer
The Unix way: Here's a datum I need to add to a config file; here's
one in the file I want to delete. Text editor -> done.
FF "should(tm)" respect global certs in /etc/ssl.

However, neither of those will fix an old version not supporting
TLS 1.0+.
--
|_|O|_| Registered Linux user #585947
|_|_|O| Github: https://github.com/dpurgert
|O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5 4AEE 8E11 DDF3 1279 A281
Rich
2018-09-12 11:30:21 UTC
Post by Mike Spencer
Post by Rich
Trusted by all the major browsers now for some time (at least if anyone
is running a reasonably recent version). Someone running NS 4.71 will
of course not have their root in their cert store.
Another bludgeon. You have to upgrade your browser in order to keep
up to date your collection of certs. Or is there a way to manually
edit that binary certs.db file (in FF) to add or delete items?
The Unix way: Here's a datum I need to add to a config file; here's
one in the file I want to delete. Text editor -> done.
FWIW, NS 4.71 (or NN 4.76) doesn't do *any* presently on-line
crypto/HTTPS.
I do exactly one thing on-line involving money and I try very hard to
ensure that total failure of security will not possibly cost me more
than I'm prepared to lose.
Even old NS 4.71 would let you install custom certs (if memory serves,
the UI was clunky, ugly, and difficult to understand if one was not
already a "TLS expert" of sorts, but it was there).

The big reason why newer certs won't work with NS 4.71 anymore is that
TLS has moved on, and old NS 4.71 does not have the code to understand
newer features (newer hashes, newer crypto algorithms, etc.). So for a
new cert using SHA3, NS 4.71 has no hope of ever importing it because
NS 4.71 has no SHA3 algorithm built in.

I.e., NS 4.71 is running V1 code, and the world has moved on to V7
code, and old V1 code does not understand the data produced/consumed by
V7 code. (Note, version numbers are made up.) And new V7 code now actively
disavows that old V1-5 data ever existed in the past and also refuses to
consume V1-5 data sets.
Andy Burns
2018-09-12 11:51:13 UTC
Post by Rich
The big reason why newer certs won't work with NS 4.71 anymore is that
TLS has moved on
Seems NS7.1 is the oldest to support SHA-256, hasn't SHA-1 been dead for
over a year as far as still-supported browsers, and hence certs served
by most webservers are concerned?
Rich
2018-09-12 12:27:59 UTC
Post by Andy Burns
Post by Rich
The big reason why newer certs won't work with NS 4.71 anymore is that
TLS has moved on
Seems NS7.1 is the oldest to support SHA-256, hasn't SHA-1 been dead for
over a year as far as still-supported browsers, and hence certs served
by most webservers are concerned?
Yes, but the point was that even if Mike found the UI in NS4.71 to add
certs, and tried to add the modern certs, he still could not add them
because old 4.71 will not understand them.

His choices are rapidly diminishing to:

1) browse to fewer and fewer websites

2) upgrade to a newer browser (and install NoScript to bring JS back
under control)

3) actually put in the effort to build a https man-in-the-middle
proxy for himself from the building blocks that are available, so
his NS4.71 can continue to think it is browsing http: sites, while
the proxy deamon is actually browsing to the https: version

For #3, there is no turn-key, drop-in, ready-made solution, which
seems to be what he wants someone to point him towards from his
occasional posts along this line. He's going to have to build #3
himself from the TLS lego bricks that do exist.
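In rough outline, option #3 amounts to a tiny local daemon that speaks
plain http to the old browser and re-issues each request over https.
Below is a minimal sketch using only Python's standard library; the
port is arbitrary, and it deliberately ignores POST, caching, and
header passthrough, so it is an illustration of the shape of the
assembly, not the finished tool:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

def upgrade(url: str) -> str:
    # A proxy-configured browser puts the absolute http:// URL on the
    # request line; rewrite only that leading scheme to https.
    return url.replace("http://", "https://", 1)

class UpgradeProxy(BaseHTTPRequestHandler):
    """Accepts plaintext HTTP from the old browser, fetches over HTTPS."""

    def do_GET(self):
        try:
            with urllib.request.urlopen(upgrade(self.path)) as resp:
                body = resp.read()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)  # plaintext back to the old browser
        except Exception as exc:
            self.send_error(502, str(exc))

def serve(port=8080):
    # Point the old browser's HTTP proxy setting at 127.0.0.1:<port>.
    HTTPServer(("127.0.0.1", port), UpgradeProxy).serve_forever()
```

The modern TLS handshake happens entirely inside urlopen(), which is
the point: the ancient browser never sees a certificate at all.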
Computer Nerd Kev
2018-09-12 23:01:43 UTC
Post by Rich
Post by Andy Burns
Post by Rich
The big reason why newer certs won't work with NS 4.71 anymore is that
TLS has moved on
Seems NS7.1 is the oldest to support SHA-256, hasn't SHA-1 been dead for
over a year as far as still-supported browsers, and hence certs served
by most webservers are concerned?
Yes, but the point was that even if Mike found the UI in NS4.71 to add
certs, and tried to add the modern certs, he still could not add them
because old 4.71 will not understand them.
1) browse to fewer and fewer websites
2) upgrade to a newer browser (and install NoScript to bring JS back
under control)
3) actually put in the effort to build a https man-in-the-middle
proxy for himself from the building blocks that are available, so
his NS4.71 can continue to think it is browsing http: sites, while
the proxy daemon is actually browsing to the https: version
For #3, there is no turn-key, drop-in, ready-made solution, which
seems to be what he wants someone to point him towards from his
occasional posts along this line. He's going to have to build #3
himself from the existing bits of tls lego bricks that do exist.
Or do what I've been doing for the sites that my old (and not so
old) browsers won't connect to anymore and use one of the HTTPS
to HTTP web proxies available online via a web interface.

With Firefox V. 2, manually adding new certificates from the latest
Debian ca-certificates package did prevent the "invalid certificate"
prompts. However, many of the certificates triggered another bug that
prevented pages loading entirely (probably the version issue that was
mentioned earlier), so in the end the only cert that I managed to
keep in there was the one used by Wikipedia (others probably do work,
but testing them one-by-one was too painful).

I can use a recent Firefox version, so this isn't such a severe
issue for me.
--
__ __
#_ < |\| |< _#
Computer Nerd Kev
2018-09-12 23:06:35 UTC
Post by Mike Spencer
I do exactly one thing on-line involving money and I try very hard to
ensure that total failure of security will not possibly cost me more
than I'm prepared to lose.
That's my approach as well, though I use two services (neither of
which has immediate access to my bank account, or the other).
--
__ __
#_ < |\| |< _#
Huge
2018-09-10 16:23:11 UTC
Post by Paul Sture
Post by Marko Rauhamaa
Post by Paul Sture
Using https is not to protect you, the owner of your website, but to
protect users visiting your website.
I have strong doubts about the level of protection. Any self-respecting
government or criminal outfit can present a valid certificate for any
domain.
True.
Post by Marko Rauhamaa
The whole concept of the "chain of trust" is flawed. Trust isn't
transitive; and I wouldn't trust even the root certificate authorities.
Essentially, it's a protection racket.
It is indeed. When Google announced that they would demote plain http
sites in their rankings I looked up the price of certificates with an
ISP I used until 2013, and the cheapest came in at something like USD 70
per annum. If you are running several sites that can easily add up to
more than you are paying an ISP for hosting those sites.
OTOH;

https://letsencrypt.org/
--
Today is Pungenday, the 34th day of Bureaucracy in the YOLD 3184
~ Stercus accidit ~
Mike Spencer
2018-09-08 06:09:41 UTC
Post by Eli the Bearded
Post by Mike Spencer
The trick is to concoct a filter that encounters every web page
*before* it reaches the HTML parser, DOM thingy, whatever.
[snip snip]
Sadly, I don't think I'm a wizardly enough hacker to implement that
for HTTPS but I'll soon have to give it a try when I finally abandon
Netscape 4.76 (because everybody, including cartoons, insists on using
the latest crypto). Maybe with a little help from various places on
Usenet.
I posted about this problem of yours in the slackware group.
Yes, you did, tnx. Grumpy old man that I am, I've grumped in more
than one group. :-) See "a little help", supra. All suggestions I've
received have been saved for reference. RSN, I'll have to both figure
out a way to get off dialup and move to a much newer browser & Linux
kernel. Just haven't worked up the energy to beat it all up yet. But
it irritates me that I'm going to have to tediously build the defences
from bumpf that I already have in NN.
Post by Eli the Bearded
Date: Fri, 6 Jul 2018 03:01:33
Subject: Re: Slackware's future
Post by Mike Spencer
Why do I want NN 4.76? It allows me easily to turn off js and
[...]
Payment Card Industry Data Security Standard ("PCI Compliance")
requires sites stop accepting older, now easily broken, SSL / TLS
versions. Anything older than TLS 1.2 should be refused as of July
2018 by any company that deals with credit cards, or any company
forced into PCI compliance by dint of other companies they interact
with.
You might want to look into running some sort of HTTPS-to-HTTP
endpoint proxy in front of your browser instead. Technically it is
quite feasible, if a tad complicated, but I can't think of one that
exists. You might be able to get a MITM proxy (eg mitmproxy) to do
it for you, or at least to talk to the sites in secure TLS and your
browser in insecure SSL.
One of the difficulties with something that runs independently to do
the actual fetch from the net is that it may be seen as
localhost://whatever, meaning that various URLs in the doc may need
to be modified before handover to the browser. I already have some
cgi-bin scripts that do this, one of which is (from my humble
perspective) a real work of deviant art. Fetches the whole page into a
Perl var and then makes maybe 25 separate changes -- substitutions,
deletions, additions.
--
Mike Spencer Nova Scotia, Canada
Rich
2018-09-08 17:09:30 UTC
Post by Mike Spencer
One of the difficulties with something that runs independently to do
the actual fetch from the net is that it may be seen as
localhost://whatever, requiring that various URLs in the doc may need
to be modified before handover to the browser. I already have some
cgi-bin scripts that do this, one of which is (from my humble
perspective) a real work of deviant art. Fetches the whole page into
a Perl var and then makes maybe 25 separate changes -- substitutions,
deletions, additions.
You've been pointed in this direction a few times. What you want is a
standard http protocol item that's already handled by browsers. It is
called a "proxy".

You just want a proxy that talks https to the web, but http (or it
could be https if you wanted the extra complexity) between itself,
running locally, and your local browser.

Unfortunately, proxies that are https to the net, but http back to the
local browser, and that also perform MITM filtering for you, don't exist
as drop-in items, so you've got to roll your own here.

But, with a proxy, the URLs don't need to be rewritten. The URLs
remain untouched because of the way the proxy protocol works.
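The reason no rewriting is needed is visible in the request line
itself: a browser configured to use a proxy sends the absolute URL, so
the proxy learns the target host without touching the page. A small
standard-library sketch of parsing such a request line (illustrative
hostnames only):

```python
from urllib.parse import urlsplit

def split_proxy_request(request_line: str):
    """Parse a proxy-style request line such as
    'GET http://example.com/tv HTTP/1.0'. A direct-to-origin request
    would instead carry only the path ('GET /tv HTTP/1.0') plus a
    Host header, which is why a non-proxy rewriter has to edit every
    URL in the document while a proxy can leave them all alone."""
    method, url, version = request_line.split()
    parts = urlsplit(url)
    return method, parts.hostname, parts.path or "/"
```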
Ivan Shmakov
2018-09-08 18:00:55 UTC
[...]
Post by Rich
You just want a proxy that talks https to the web, but http (or it
could be https if you wanted the extra complexity) between itself,
running locally, and your local browser.
Unfortunately, proxies that are https to the net, but http back to
the local browser, and that also perform MITM filtering for you don't
exist as drop-in items, so you've got to roll your own here.
I believe that Sslstrip, Mitmproxy and Sslsplit (I know I've
mentioned them a few times already) may happen to do exactly that.
Post by Rich
But, with a proxy, the url's don't need to be rewritten. The url's
remain untouched because of the way the proxy protocol works.
Yes.
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
Eli the Bearded
2018-09-08 21:26:02 UTC
Post by Rich
Post by Mike Spencer
One of the difficulties with something that runs independently to do
the actual fetch from the net is that it may be seen as
localhost://whatever, requiring that various URLs in the doc may need
to be modified before handover to the browser. I already have some
cgi-bin scripts that do this, one of which is (from my humble
perspective) a real work of deviant art. Fetches the whole page into
a Perl var and then makes maybe 25 separate changes -- substitutions,
deletions, additions.
You've been pointed in this direction a few times. What you want is a
standard http protocol item that's already handled by browsers. It is
called a "proxy".
You just want a proxy that talks https to the web, but http (or it
could be https if you wanted the extra complexity) between itself,
running locally, and your local browser.
And that's what I pointed him to: mitmproxy, which decrypts and
re-encrypts with a new cert. The configuration he would need is just
to get the re-encrypt side speaking some old SSL standard that his
Netscape 4.7 can understand. I think it is fully doable, but I don't
know how much work it would be.

Elijah
------
used Netscape 4.x for a long time, but not nearly that long
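For reference, the setup suggested above might look something like this on the command line. This is a sketch: `filter.py` is a hypothetical addon script, and the exact options for pinning an old TLS/SSL version toward the client vary between mitmproxy releases, so check the options list for the version installed.

```shell
# Run mitmdump locally; the browser is then configured to use
# 127.0.0.1:8080 as its proxy. -s loads an optional filtering script.
mitmdump --listen-host 127.0.0.1 --listen-port 8080 -s filter.py
```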
Rich
2018-09-08 21:38:39 UTC
Post by Eli the Bearded
Post by Rich
Post by Mike Spencer
One of the difficulties with something that runs independently to do
the actual fetch from the net is that it may be seen as
localhost://whatever, requiring that various URLs in the doc may need
to be modified before handover to the browser. I already have some
cgi-bin scripts that do this, one of which is (from my humble
perspective) a real work of deviant art. Fetches the whole page into
a Perl var and then makes maybe 25 separate changes -- substitutions,
deletions, additions.
You've been pointed in this direction a few times. What you want is a
standard http protocol item that's already handled by browsers. It is
called a "proxy".
You just want a proxy that talks https to the web, but http (or it
could be https if you wanted the extra complexity) from itself running
locally and your local browser.
And that's what I pointed him to: mitmproxy, which decrypts and
re-encrypts with a new cert. The configuration he would need is just
to get the re-encrypt side speaking some old SSL standard that his
Netscape 4.7 can understand.
Yep. That should do it for him.
Post by Eli the Bearded
I think it is fully doable, but I don't know how much work it would
be.
I think it is fully doable too, but I feel he's looking for a
"drop-in", turn-key solution. As we've said, there are a bunch of
pieces that appear capable of doing what he wants, but there is "some
assembly required". How large "some" is, we don't know. He will need
to put in the effort to perform the assembly.
Oregonian Haruspex
2018-09-11 07:54:06 UTC
Obviously the solution is for each ‘site’ to become an ‘app’ so we can
finally kill the web once and for all. What did the unwashed masses bring
to the Internet? Nothing good. Once they bugger off we will all be happier.