Discussion:
[Link Posting] History of Gopher
Rich
2018-08-27 12:05:18 UTC
####################################################################
# ATTENTION: This post is a reference to a website. The poster of #
# this Usenet article is not the author of the referenced website. #
####################################################################

<URL:https://prgmr.com/blog/gopher/2018/08/23/gopher.html>
For many people, the world wide web is synonymous with the Internet.
While the HTTP protocol dominates the modern internet, the Internet is
in fact made up of many protocols: some obsolete, some obscure, some
well known.
One of the more stubborn protocols is Gopher. Introduced in 1991 (the
same year as HTTP), Gopher, like the web, is document-centric.
By about 1990, information on the internet was expanding rapidly enough
that it needed more organization and a better search capability. In 1991
researchers at the University of Minnesota developed the Gopher protocol
in an attempt to provide some of that organization. Gopher provides a
hierarchical, text-based menu system for organizing the contents of a
data repository (such repositories eventually came to be called
"gopherholes").
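
To give a sense of how simple the protocol is: a Gopher client opens a
TCP connection to port 70, sends a selector string followed by CRLF,
and reads back a menu of tab-separated lines, each starting with a
one-character item type (RFC 1436). Here is a minimal sketch in
Python; the public Floodgap server is used purely as an illustration,
and this is nowhere near a complete client:

#!/usr/bin/env python3
# Minimal Gopher menu fetch (RFC 1436); illustrative sketch only.
import socket

def fetch_menu(host, selector="", port=70):
    """Send a selector, return the raw response text."""
    with socket.create_connection((host, port)) as sock:
        # The entire request is just the selector plus CRLF.
        sock.sendall(selector.encode("ascii") + b"\r\n")
        data = b""
        while chunk := sock.recv(4096):
            data += chunk
    return data.decode("latin-1")

# Menu lines are: type char + display string, selector, host, port,
# separated by tabs; a line holding a lone "." ends the listing.
for line in fetch_menu("gopher.floodgap.com").splitlines():
    if line == ".":
        break
    item_type, rest = line[:1], line[1:]
    print(item_type, rest.split("\t")[0])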
Soon after, the search capability came in the form of a new search
engine called Veronica. It was a whimsical time on the net, and geeks
still ruled most of it, so not only was the name taken from Archie
comics, it was soon turned into a backronym as "Very Easy
Rodent-Oriented Net-wide Index to Computer Archives". Veronica was
something of a brute-force approach. It used a dynamically updated
database of every file and every hierarchy on every Gopher server on the
internet.
Veronica was eventually joined by an alternative search tool named
Jughead. (Whimsical, remember?) Jughead differed from Veronica in that
it did not rely on a large and ever-expanding database; on the other
hand, you had to specify which Gopher server you wanted to search.
Clearly there was a problem with scaling here, and that is part of what
led to Gopher's eventual decline as the internet kept expanding. Other
threats came from the sheer versatility of HTML and HTTP, the rise of
universal text-based searching, and the eventual decision by the
University of Minnesota to charge licensing fees for the use of their
software. Gopher was wildly popular for a few years, but by about 1996
it had fallen far behind the new browser-based web.
...
Computer Nerd Kev
2018-08-27 22:38:15 UTC
Post by Rich
<URL:https://prgmr.com/blog/gopher/2018/08/23/gopher.html>
Yes, every so often I run across a bookmark or reference about Gopher
and decide to jump down the hole to see what's changed. My latest
discovery, during a visit a few weeks ago, was that I can now view
a local (almost) weather forecast via Gopher. So, surprisingly
enough, that's one of my core internet applications served without
HTTP.

One of these days I'll probably get really enthused and set up
my own site on the Floodgap server.
--
__ __
#_ < |\| |< _#
Ivan Shmakov
2018-09-17 12:37:13 UTC
Post by Computer Nerd Kev
Post by Rich
<URL:https://prgmr.com/blog/gopher/2018/08/23/gopher.html>
Yes every so often I run across a bookmark or reference about Gopher
and decide to jump down the hole to see what's changed. My latest
discovery, during a visit a few weeks ago, was that I can now view a
local (almost) weather forecast via Gopher. So, surprisingly enough,
that's one of my core internet applications served without HTTP.
It's also possible to get a weather forecast via HTTP but without
a /browser/ -- thanks to the availability of GFS data via OPeNDAP.

Suppose we're, for some reason, interested in Moscow, ID. As of
this writing, [1] points to [2] as the latest available forecast.
Now, I have GrADS [3] installed with NetCDF and OPeNDAP support
linked in, which I can use as follows. (The place coordinates
are per [4].)

$ grads -lb
...

Select "plain text" output, for easier inclusion into this post
(although GrADS is perfectly capable of graphical output as well):

ga-> set gxout print

Select dataset:

ga-> sdfopen http://nomads.ncep.noaa.gov:9090/dods/gfs_0p25/gfs20180917/gfs_0p25_06z

Select the whole range along the time axis (use the `query file'
command to find out the limits; in this case, that'd be 2018-09-17
6:00 UTC through 2018-09-27 6:00 UTC -- at 3 hour intervals):

ga-> set t 1, 81
Time values set: 2018:9:17:6 2018:9:27:6

Select specific spatial point:

ga-> set lat 46.73239
LAT set to 46.75 46.75
ga-> set lon -117.00017
LON set to -117 -117

Get surface temperature in degrees Celsius (slightly reformatted
for readability, with a time-of-day header line added):

ga-> d tmpsfc - 273.15
Printing Grid -- 81 Values -- Undef = -9.99e+08
## 6 UTC 9 UTC 12 UTC 15 UTC 18 UTC 21 UTC | 0 UTC 3 UTC
9.55297 6.08755 4.82278 10.6917 27.3735 26.1931 19.8854 10.97550
9.14181 8.35006 6.98281 11.371 24.059 27.6129 21.7885 10.37740
7.70101 5.95001 4.50851 10.1745 22.4721 25.412 20.4294 9.93920
7.09545 6.15844 4.81725 9.84997 17.8827 24.2827 19.4155 12.88270
9.85925 8.44998 7.20385 11.25 24.9904 30.8578 24.9034 16.60550
15.15 12.7307 10.05 11.6676 22.8354 25.9256 19.4852 11.81990
8.0111 7.16747 7.74056 10.7815 22.1291 24.7711 18.8724 8.46496
6.21264 4.85433 4.05776 9.56045 19.4621 22.3871 17.0242 7.11572
5.12148 4.13537 4.74999 8.52865 25.9834 23.3823 17.8179 8.41931
6.24999 4.63085 3.15554 7.76916 19.2207 22.3853 17.2327 6.69464
5.28381
ga->

[1] http://nomads.ncep.noaa.gov:9090/dods/gfs_0p25/gfs20180917
[2] http://nomads.ncep.noaa.gov:9090/dods/gfs_0p25/gfs20180917/gfs_0p25_06z
[3] http://packages.debian.org/stretch/grads
[4] http://download.geonames.org/export/dump/cities1000.zip
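
For anyone without GrADS at hand, the same OPeNDAP endpoint can be
read from Python as well. A rough sketch of the equivalent of the
session above, assuming the netCDF4 library is built with DAP
support; note that NOMADS only keeps recent run dates, so the 2018
URL will long since have expired:

#!/usr/bin/env python3
# Rough Python equivalent of the GrADS session above (a sketch).
import numpy as np
from netCDF4 import Dataset

url = ("http://nomads.ncep.noaa.gov:9090/dods/gfs_0p25/"
       "gfs20180917/gfs_0p25_06z")
ds = Dataset(url)          # read over OPeNDAP, not downloaded

# Find the grid point nearest to Moscow, ID (coordinates per [4]).
lat = ds.variables["lat"][:]
lon = ds.variables["lon"][:]
i = int(np.abs(lat - 46.73239).argmin())
# GFS longitudes typically run 0..360, hence the modulo.
j = int(np.abs(lon - (-117.00017 % 360)).argmin())

# Surface temperature over the whole forecast range, K to degC;
# tmpsfc is laid out as (time, lat, lon), as GrADS reports it.
print(ds.variables["tmpsfc"][:, i, j] - 273.15)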

[...]
--
FSF associate member #7257 http://am-1.org/~ivan/
Eli the Bearded
2018-08-27 22:54:30 UTC
Post by Rich
<URL:https://prgmr.com/blog/gopher/2018/08/23/gopher.html>
Soon after, the search capability came in the form of a new search
engine called Veronica. It was a whimsical time on the net, and geeks
still ruled most of it, so not only was the name taken from Archie
comics, it was soon turned into a backronym as "Very Easy
Rodent-Oriented Net-wide Index to Computer Archives". Veronica was
something of a brute-force approach. It used a dynamically updated
database of every file and every hierarchy on every Gopher server on the
internet.
The missing bit there is "Veronica" was chosen as a reference to Archie
comics because "Archie" was already in use as the name of a search
engine for FTP sites. (How do you use a search engine for FTP sites?
Email. You sent a query by email and you got a result sometime later in
a response email.)

If you want to go-fer something, lynx still supports the protocol. Get
started with:

lynx gopher://gopherproject.org

It's a straightforward UI for anyone who has used lynx before.

Elijah
------
or proxied to https at https://gopherproxy.meulie.net/gopherproject.org/
Michael Black
2018-08-28 02:54:31 UTC
Post by Eli the Bearded
Post by Rich
<URL:https://prgmr.com/blog/gopher/2018/08/23/gopher.html>
Soon after, the search capability came in the form of a new search
engine called Veronica. It was a whimsical time on the net, and geeks
still ruled most of it, so not only was the name taken from Archie
comics, it was soon turned into a backronym as "Very Easy
Rodent-Oriented Net-wide Index to Computer Archives". Veronica was
something of a brute-force approach. It used a dynamically updated
database of every file and every hierarchy on every Gopher server on the
internet.
The missing bit there is "Veronica" was chosen as a reference to Archie
comics because "Archie" was already in use as the name of a search
engine for FTP sites. (How do you use a search engine for FTP sites?
Email. You sent a query by email and you got a result sometime later in
a response email.)
I always have to check, since I do associate the names with the comic
book, but one of them isn't like the others.

Archie was a local creation, here at McGill. One of the people
involved was Peter Deutsch, not to be confused with the (I think
slightly differently named) Peter Deutsch in "Hackers". About 1996
the local paper ran an article about him; it credited him with
bringing the internet to Montreal, though I'm not sure if that meant
just McGill (where he was) or the city overall. But McGill long had
a classified-ad section on their webpage, and "forever" it was served
via gopher. Maybe about 2000, maybe later, they finally dropped it,
and oddly it became less useful as an HTML page. Though that's likely
just because the new version let non-McGill people post.

About 1999 Peter Deutsch made an appearance in the local newsgroup:
someone coming in from out of town needed the cross street for an
address, and Peter supplied the correct answer well down the thread.
It was the only time I saw a post from him, so maybe someone had
pointed him to the thread.

Michael
Sehnsucht
2018-09-01 01:25:55 UTC
Thanks, that was a quick yet nice recap. Gopher is not dead at all:
there are plenty of active phlogs; the SDF's Phlogosphere alone,
running Gophernicus on NetBSD, is already a large community.
Ivan Shmakov
2018-09-01 10:30:24 UTC
[Cross-posting to news:comp.infosystems.www.misc.]

[http://prgmr.com/blog/gopher/2018/08/23/gopher.html]

[...]
Why Use It?
If Gopher was supplanted by HTTP, why use it? As with many things,
the answer depends partly on your application. One of the selling
points for Gopher back in the day was that it was very light on
resources -- no media, just simple text menus. This makes it
attractive today for document-centric applications that don't want to
deal with the breadth and complexity of the modern web.
Try Gopher if you like the feeling of tech nostalgia. Gopher is part
of a bygone age on the net. The simple fact that Veronica used a
database of every Gopher archive to search points to a time when the
Internet was small and personal, and it can bring that feeling back
in a small, carefully curated and distributed Gopher network. Retro
can be fun.
If you prize security, Gopher can be handy. It's purely text-based.
No JavaScript. None of the tools and add-ons that make the modern
net such a minefield.
I'm going to disagree with this general sentiment. First of all,
if you're setting up your own site, JavaScript is by no means a
necessity. For instance, my pages (http://am-1.org/~ivan/) ought
to be fully readable without it.

Somewhat of an exception is that I use MathJax for typesetting
mathematical formulae with TeX-like quality. With Lynx, one such
formula will read like:

\[ \begin {align} p (G) &\overset {\text {df}} = \left| V \right|,
& q (G) &\overset {\text {df}} = \left| E \right|.\\ \end {align}\]

I don't have much trouble understanding that, but I have to
admit that I've spent quite some time with LaTeX.

Another "exception" is http://am-1.org/~ivan/src/JL5desQ9.xhtml,
etc., yet the only reason these are implemented in JavaScript is
the sheer availability of the language. My goal was specifically
to create a program that can be run nearly anywhere. (I hope to
try Emscripten at some point so that I can write C code and
publish it alongside JavaScript "binaries" that can enjoy this
kind of portability.)

The second part of the equation is the server-side software. As
an example, I'd like to refer to http://skarnet.org/, and
specifically /cgi-bin/memstat.cgi, which reads:

How much memory is alyss using right now?

Kernel excluded, the amount of memory in use is: 73976 kB

[That is: less than 73 MiB!]

That may be way more than that of an old-time Gopher server, but
the host apparently runs a plethora of services (such as ESMTPS,
IMAP, DNS, etc.) in addition to the HTTP server proper. Refer to
http://skarnet.org/poweredby.html for details.

One specific solution that can be applied when maintaining a
lightweight Web resource is to limit oneself to "static" files as
much as possible. As shown by the Ikiwiki software, it's still
possible to implement a "dynamic" Web site that way.

(Unfortunately, access to prior revisions via Ikiwiki is not
implemented out of the box using static files only. Contributing to
and commenting on the resource also places additional burden on
the server, but that's unavoidable.)

Finally, I'd like to point out that the very same Lynx browser that's
suggested as a Gopher client can be used to read the Web as well.
While, arguably, this doesn't make you safe from any possible
security vulnerability whatsoever, at least JavaScript-related
issues are off the metaphorical minefield mentioned above.

There're two obvious objections to my suggestion. The first is
that you, or some other person or group, will find Lynx highly
unfamiliar. To which I'm going to counter that, on the one hand,
in the context of a Web vs. Gopher discussion, if Lynx feels
unfamiliar to someone when it comes to reading the Web, wouldn't it
be just as unfamiliar for accessing Gopher resources? On the other
hand, you can familiarize yourself with it anytime you like.

A second objection is that "many sites" are going to be
inaccessible with Lynx. While this may very well be true in
absolute numbers, I've found that I only rarely come across a
resource that is both inaccessible with Lynx (typically due to
what is, IMO, unwarranted use of JavaScript) and of sufficient
interest for me to bother.

There're several possible ways to proceed from there. One is
to understand that it may be infeasible, in principle, for that
specific resource to be made accessible with Lynx. Think of
http://earth.nullschool.net/, for example. Somewhat similar may
be the case of any Web page used for entering your bank card
details, although I'm not sure I understand why.

Another, especially in the case of community Web resources, is
to find the maintainers' contacts and ask them for their reasons.
Perhaps they simply didn't think that someone may be interested
in reading their resource with Lynx and just went with some
JavaScript-heavy, out of the box solution.

The third one would be to check for some kind of a public API.
That won't necessarily make the site accessible with Lynx, but
perhaps some kind of lightweight (non-browser) interface could
either be available, or reasonably easy to implement. (As an
example, I've contributed to Wikimedia projects for several years
without ever leaving the comfortable confines of my Emacs.)
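
As a concrete illustration of that third route, here is a sketch of
querying the MediaWiki Action API (the documented api.php interface
that such non-browser clients build on) with nothing but the Python
standard library; the User-Agent string below is made up:

#!/usr/bin/env python3
# Search Wikipedia over its public API, no browser involved (sketch).
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "action": "query",        # the generic query module
    "list": "search",         # full-text search
    "srsearch": "Gopher protocol",
    "format": "json",
})
url = "https://en.wikipedia.org/w/api.php?" + params
# Wikimedia asks API clients for a descriptive User-Agent.
req = urllib.request.Request(
    url, headers={"User-Agent": "lynx-friendly-example/0.1"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

for hit in data["query"]["search"]:
    print(hit["title"])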

Unfortunately, I'm not aware of any Web authoring recommendations
that would help an interested party to develop lightweight,
Lynx-friendly and "document-centric" (though I'm not sure what
the author meant by that) Web resources. But that means an
opportunity for someone interested in the subject, doesn't it?
Ultimately though, use Gopher because you can.
I wonder if one of these days I'll try HPT. Or Crashmail 2.

[...]
--
FSF associate member #7257 http://softwarefreedomday.org/ 15 September 2018
sehnsucht
1970-01-01 00:00:00 UTC
Post by Ivan Shmakov
Finally, I'd like to point that the very same Lynx browser that's
suggested as a Gopher client can be used to read Web as well.
While, arguably, this doesn't make you safe from any possible
security vulnerability whatsoever, at least JavaScript-related
issues are off the metaphorical minefield mentioned above.
There're two obvious objections to my suggestion. The first is
that you, or some other person or group, will find Lynx highly
unfamiliar. To which I'm going to counter that, on one hand,
in the context of a Web vs. Gopher discussion, if Lynx feels
unfamiliar to someone when it comes to Web reading, wouldn't it
be any less unfamiliar for accessing Gopher resources? On the
other, you can familiarize yourself with it anytime you like.
I use the 'gopher' client from quux.org (the famous gopher
server): http://gopher.quux.org:70/give-me-gopher/ and it works
well on NetBSD. Yes, it's still a text-mode browser after all, but
it's fast, intuitive, full-featured and easy to use (man pages
below). It can browse gopher only (it doesn't speak ftp, http, or
nntp), so it isn't exposed to the security threats you were
speaking about.

http://www.linuxcertif.com/man/1/gopher/
http://www.linuxcertif.com/man/5/gopherrc/
--
----Android NewsGroup Reader----
http://usenet.sinaapp.com/