Discussion:
terminal only for two weeks
Add Reply
Retrograde
2024-11-25 13:34:25 UTC
Reply
Permalink
From the «text is good enough» department:
Title: Using (only) a Linux terminal for my personal computing in 2024
Author: Thom Holwerda
Date: Sun, 24 Nov 2024 22:13:32 +0000
Link: https://www.osnews.com/story/141194/using-only-a-linux-terminal-for-my-personal-computing-in-2024/


A month and a bit ago, I wondered if I could cope with a terminal-only
computer[1].
[…]

The only way to really find out was to give it a go.

My goal was to see what it was like to use a terminal-only computer for my
personal computing for two weeks, and more if I fancied it.
↫ Neil’s blog[2]

I tried to do this too, once.

Once.

Doing everything from the terminal just isn’t viable for me, mostly because I
didn’t grow up with it. Our family’s first computer ran MS-DOS (with a Windows
3.1 installation we never used), and I’m pretty sure the experience of using
MS-DOS as my first CLI ruined me for life. My mental model for computing didn’t
start forming properly until Windows 95 came out, and as such, computing is
inherently graphical for me, and no matter how many amazing CLI and TUI
applications are out there – and there are many, many amazing ones – my brain
just isn’t compatible with it.

There are a few tasks I prefer doing with the command line, like updating my
computers or editing system files using Nano, but for everything else I’m just
faster and more comfortable with a graphical user interface. This comes down to
not knowing most commands by heart, and often not even knowing the options and
flags for the most basic of commands, meaning even very basic operations that
people comfortable using the command line do without even thinking, take me
ages.

I’m glad any modern Linux distribution – I use Fedora KDE on all my computers –
offers both paths for almost anything you could do on your computer, and unless
I specifically opt to do so, I literally – literally literally – never have to
touch the command line.

Links:
[1]: https://neilzone.co.uk/2024/10/could-i-cope-with-a-terminal-only-computer/ (link)
[2]: https://neilzone.co.uk/2024/11/using-only-a-linux-terminal-for-my-personal-computing-in-2024/ (link)
D
2024-11-25 21:18:21 UTC
Reply
Permalink
Post by Retrograde
Title: Using (only) a Linux terminal for my personal computing in 2024
Author: Thom Holwerda
Date: Sun, 24 Nov 2024 22:13:32 +0000
Link: https://www.osnews.com/story/141194/using-only-a-linux-terminal-for-my-personal-computing-in-2024/
A month and a bit ago, I wondered if I could cope with a terminal-only
computer[1].
[…]
The only way to really find out was to give it a go.
My goal was to see what it was like to use a terminal-only computer for my
personal computing for two weeks, and more if I fancied it.
↫ Neil’s blog[2]
I tried to do this too, once.
Once.
Doing everything from the terminal just isn’t viable for me, mostly because I
didn’t grow up with it. Our family’s first computer ran MS-DOS (with a Windows
3.1 installation we never used), and I’m pretty sure the experience of using
MS-DOS as my first CLI ruined me for life. My mental model for computing didn’t
start forming properly until Windows 95 came out, and as such, computing is
inherently graphical for me, and no matter how many amazing CLI and TUI
applications are out there – and there are many, many amazing ones – my brain
just isn’t compatible with it.
There are a few tasks I prefer doing with the command line, like updating my
computers or editing system files using Nano, but for everything else I’m just
faster and more comfortable with a graphical user interface. This comes down to
not knowing most commands by heart, and often not even knowing the options and
flags for the most basic of commands, meaning even very basic operations that
people comfortable using the command line do without even thinking, take me
ages.
I’m glad any modern Linux distribution – I use Fedora KDE on all my computers –
offers both paths for almost anything you could do on your computer, and unless
I specifically opt to do so, I literally – literally literally – never have to
touch the command line.
[1]: https://neilzone.co.uk/2024/10/could-i-cope-with-a-terminal-only-computer/ (link)
[2]: https://neilzone.co.uk/2024/11/using-only-a-linux-terminal-for-my-personal-computing-in-2024/ (link)
Fascinating experiment. I would not be able to do it. I need a browser to
run my business, manage my finances, etc., so terminal-only, while nice,
would be very difficult without some serious programming and hacking
around problems.
Lawrence D'Oliveiro
2024-11-25 21:52:59 UTC
Reply
Permalink
This comes down to not knowing most commands by heart,
and often not even knowing the options and flags for the most basic of
commands ...
Don’t need to. Type “man «cmd»” to see all the details of the options
available for any external command. I do this all the time.
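For instance, any of these (just the stock tools, nothing exotic):

  man ls            # the full manual page for ls
  ls --help         # most GNU tools also print a short flag summary
  man -k rename     # search man page descriptions by keyword (apropos)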
I’m glad any modern Linux distribution – I use Fedora KDE on all my
computers – offers both paths for almost anything you could do on your
computer, and unless I specifically opt to do so, I literally –
literally literally – never have to touch the command line.
Also, running a command line through a GUI terminal emulator lets you take
advantage of cut/copy/paste between windows, which is a feature not
available on a pure-command-line system.
Mike Spencer
2024-11-26 07:18:45 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets you take
advantage of cut/copy/paste between windows, which is a feature not
available on a pure-command-line system.
The command line is like language. The GUI is like shopping.

Turns out, lots of my highly educated friends aren't all that good
with language. :-o

A windowing system is not in itself what most people mean by GUI and
is, yes, a huge leap forward over plain command-line terminals.

I do use a GUI browser and, occasionally, a GUI image editing device.
I can imagine that audio/video editing may work best in a full GUI.

But my default is a simple window manager (twm) on top of X with
numerous xterms open or iconified, some running things like dmesg -w,
one with root access etc.

I took one look, long ago, at Windows 95 and moved straight to Linux.
Took one look at KDE (shopping) and found twm.

FWIW,
--
Mike Spencer Nova Scotia, Canada
Lawrence D'Oliveiro
2024-11-26 21:28:23 UTC
Reply
Permalink
Post by Mike Spencer
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets you
take advantage of cut/copy/paste between windows, which is a feature
not available on a pure-command-line system.
The command line is like language. The GUI is like shopping.
Did you learn in Comp Sci about the concept of “abstract machines”? To
program a computer, you start with the bare hardware, and add layers of
software on top of that, each creating a new “abstract machine” that is
easier to use for narrower and narrower classes of problems, albeit less
flexible than the machine layer below.

The command line is itself such an abstract machine, and you can create
additional layers on top of that by writing shell scripts.
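For example, even a two-line script is already a new layer; the name and
paths here are made up purely for illustration:

  #!/bin/sh
  # backup-notes: a tiny new "abstract machine" built from existing tools
  tar czf "$HOME/notes-$(date +%F).tar.gz" "$HOME/notes" \
    && echo "backup done"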

GUIs, on the other hand, are not suited to having any additional layers
built on top of them. They are designed to be used by humans, and that’s
that. Attempts to automate GUI operations tend not to work very well.
Post by Mike Spencer
Took one look at KDE (shopping) and found twm.
KDE Konsole is probably the most versatile of all the GUI terminal
emulators.
yeti
2024-11-26 08:40:50 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets you take
advantage of cut/copy/paste between windows, which is a feature not
available on a pure-command-line system.
I can still use cut & paste on Linux's "real VTs", but if I were to try
working GUI-free for a while I'd prefer a decorationless fullscreen XTerm
over those, because of the easier size switching and its Sixel and
Tek 40xx graphics.

Screen and Tmux would offer (keyboard driven) Cut&Paste.
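tmux, for one, also exposes its paste buffer on the command line, which
is handy for moving text around without ever touching a mouse (standard
tmux subcommands):

  tmux set-buffer "some text"    # load text into tmux's paste buffer
  tmux show-buffer               # inspect what is currently in it
  tmux paste-buffer              # paste it into the active pane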

There now may be framebuffer terminals with most of the features of
XTerm, but testing those still is crying for attention in my eternally
growing (™Dark Energy Inside!™) to do list. *sigh!*
--
1. Hitchhiker 5: (101) "You just come along with me and have a good
time. The Galaxy's a fun place. You'll need to have this fish in your
ear."
Lawrence D'Oliveiro
2024-11-26 21:24:41 UTC
Reply
Permalink
Post by yeti
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets you
take advantage of cut/copy/paste between windows, which is a feature
not available on a pure-command-line system.
I still can use Cut&Paste on Linux's "real VTs" but I'd prefer a
decorationless fullscreen XTerm over those if I would try to work
GUIfree for a while because of easier size switching, Sixel and TeK40xx
graphics.
But then it becomes difficult to have more than one terminal session open
at once.

I typically have about two dozen.
candycanearter07
2024-11-30 01:20:04 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
This comes down to not knowing most commands by heart,
and often not even knowing the options and flags for the most basic of
commands ...
Don’t need to. Type “man «cmd»” to see all the details of the options
available for any external command. I do this all the time.
I’m glad any modern Linux distribution – I use Fedora KDE on all my
computers – offers both paths for almost anything you could do on your
computer, and unless I specifically opt to do so, I literally –
literally literally – never have to touch the command line.
Also, running a command line through a GUI terminal emulator lets you take
advantage of cut/copy/paste between windows, which is a feature not
available on a pure-command-line system.
You can technically emulate that with screen or a similar multiplexer.
--
user <candycane> is generated from /dev/urandom
yeti
2024-11-30 03:40:57 UTC
Reply
Permalink
Post by candycanearter07
You can technically emulate that with screen or a similar multiplexer.
Apropos similar: The funniest multiplexer I saw was Neercs.

<https://github.com/cacalabs/neercs>

http://youtu.be/7d33Pu2OW7k
http://youtu.be/sQr42LjaNCY

Was it ever officially finished and released?
--
I do not bite, I just want to play.
candycanearter07
2024-12-01 20:40:03 UTC
Reply
Permalink
Post by yeti
Post by candycanearter07
You can technically emulate that with screen or a similar multiplexer.
Apropos similar: The funniest multiplexer I saw was Neercs.
<https://github.com/cacalabs/neercs>
http://youtu.be/7d33Pu2OW7k
http://youtu.be/sQr42LjaNCY
Was it ever officially finished and released?
Honestly, that looks super cool and it's a shame it doesn't seem like it
was finished.
--
user <candycane> is generated from /dev/urandom
Lawrence D'Oliveiro
2024-11-30 03:52:19 UTC
Reply
Permalink
Post by candycanearter07
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets you
take advantage of cut/copy/paste between windows, which is a feature
not available on a pure-command-line system.
You can technically emulate that with screen or a similar multiplexer.
A GUI lets you do that between different apps, not just terminal
emulators, as well.
candycanearter07
2024-12-01 20:40:04 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by candycanearter07
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets you
take advantage of cut/copy/paste between windows, which is a feature
not available on a pure-command-line system.
You can technically emulate that with screen or a similar multiplexer.
A GUI lets you do that between different apps, not just terminal
emulators, as well.
I'm sure you can set something up with xclip if you really need that.
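Something along these lines, assuming xclip is installed:

  dmesg | tail -n 20 | xclip -selection clipboard   # output -> X clipboard
  xclip -selection clipboard -o > snippet.txt       # X clipboard -> file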
--
user <candycane> is generated from /dev/urandom
Lawrence D'Oliveiro
2024-12-01 23:24:44 UTC
Reply
Permalink
Post by candycanearter07
Post by Lawrence D'Oliveiro
Post by candycanearter07
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets you
take advantage of cut/copy/paste between windows, which is a feature
not available on a pure-command-line system.
You can technically emulate that with screen or a similar multiplexer.
A GUI lets you do that between different apps, not just terminal
emulators, as well.
I'm sure you can set something up with xclip if you really need that.
But xclip requires a GUI, does it not?
candycanearter07
2024-12-02 02:00:03 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by candycanearter07
Post by Lawrence D'Oliveiro
Post by candycanearter07
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets you
take advantage of cut/copy/paste between windows, which is a feature
not available on a pure-command-line system.
You can technically emulate that with screen or a similar multiplexer.
A GUI lets you do that between different apps, not just terminal
emulators, as well.
I'm sure you can set something up with xclip if you really need that.
But xclip requires a GUI, does it not?
So does running GUI apps. For terminal apps, copy/paste via a
multiplexer should be fine.
--
user <candycane> is generated from /dev/urandom
Lawrence D'Oliveiro
2024-12-02 05:41:24 UTC
Reply
Permalink
Post by candycanearter07
Post by Lawrence D'Oliveiro
Post by candycanearter07
Post by Lawrence D'Oliveiro
Post by candycanearter07
Post by Lawrence D'Oliveiro
Also, running a command line through a GUI terminal emulator lets
you take advantage of cut/copy/paste between windows, which is a
feature not available on a pure-command-line system.
You can technically emulate that with screen or a similar
multiplexer.
A GUI lets you do that between different apps, not just terminal
emulators, as well.
I'm sure you can set something up with xclip if you really need that.
But xclip requires a GUI, does it not?
So does running GUI apps.
If you’re running a GUI, you might as well use full-function GUI cut/copy/
paste, which is more general than anything provided within a character-
based multiplexer, anyway.
John McCue
2024-11-26 03:13:32 UTC
Reply
Permalink
Post by Retrograde
Title: Using (only) a Linux terminal for my personal computing in 2024
Author: Thom Holwerda
Date: Sun, 24 Nov 2024 22:13:32 +0000
Link: https://www.osnews.com/story/141194/using-only-a-linux-terminal-for-my-personal-computing-in-2024/
A month and a bit ago, I wondered if I could cope with a terminal-only
computer[1].
[…]
The only way to really find out was to give it a go.
I am glad you tried; I'm sure it was a nice and very different
experience.

<snip>
Post by Retrograde
Doing everything from the terminal just isn't viable for me,
mostly because I didn't grow up with it.
Fair enough, but at least you tried to see what things were
like for us old people. But yes, big changes like this are
hard to deal with.

I started before DOS existed on minis and I remember when
GUIs became a thing. I had to be dragged kicking and
screaming into that environment :) Still I pretty much live
in Xterms and only need a GUI for browsing and html email.

<snip>

Nice post!
--
csh(1) - "An elegant shell, for a more... civilized age."
- Paraphrasing Star Wars
D
2024-11-26 09:22:22 UTC
Reply
Permalink
Post by John McCue
Post by Retrograde
Title: Using (only) a Linux terminal for my personal computing in 2024
Author: Thom Holwerda
Date: Sun, 24 Nov 2024 22:13:32 +0000
Link: https://www.osnews.com/story/141194/using-only-a-linux-terminal-for-my-personal-computing-in-2024/
A month and a bit ago, I wondered if I could cope with a terminal-only
computer[1].
[…]
The only way to really find out was to give it a go.
I am glad you tried, sure it was a nice and very different
experience.
<snip>
Post by Retrograde
Doing everything from the terminal just isn't viable for me,
mostly because I didn't grow up with it.
Fair enough, but at least you tried to see what things were
like for us old people. But yes, big changes like this are
hard to deal with.
I started before DOS existed on minis and I remember when
GUIs became a thing. I had to be dragged kicking and
screaming into that environment :) Still I pretty much live
in Xterms and only need a GUI for browsing and html email.
Through the wonders of alpine, at least you can do HTML email in the
terminal as well! =)

I use the GUI for web browsing, reading PDFs and LibreOffice. The rest
sits in the terminal (email, programming/scripting, tinkering, reading
text files).

I have been thinking about moving the reading part of web browsing into
the terminal as well, but haven't found a browser I'm happy with. Modern
web sites tend to become too messed up when viewed in the terminal. Maybe
it would be possible to write a kind of "pre-processor" that formats web
sites with a text based browser in mind?
Post by John McCue
<snip>
Nice post!
yeti
2024-11-26 11:33:23 UTC
Reply
Permalink
Post by D
I have been thinking about moving the reading part of web browsing
into the terminal as well, but haven't found a browser I'm happy
with.
I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
Gopher and similar).
Post by D
Maybe it would be possible to write a kind of "pre-processor" that
formats web sites with a text based browser in mind?
Despite me finding this solution really scary, something like that
indeed exists:

<https://www.brow.sh/>
--
4. Hitchhiker 11:
(72) "Watch the road!'' she yelped.
(73) "Shit!"
D
2024-11-26 15:36:07 UTC
Reply
Permalink
Post by yeti
Post by D
I have been thinking about moving the reading part of web browsing
into the terminal as well, but haven't found a browser I'm happy
with.
I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
Gopher and similar).
True.
Post by yeti
Post by D
Maybe it would be possible to write a kind of "pre-processor" that
formats web sites with a text based browser in mind?
Despite me finding this solution really scary, something like that
<https://www.brow.sh/>
Ah yes... I've seen this before! I did drop it due to its dependency on
FF, but the concept is similar. My idea was to aggressively filter a web
page before passing it on to elinks or similar.

Perhaps rewriting it a bit in order to avoid the looooooong list of menu
options or links that always come up at the top of the page, before the
content of the page shows after a couple of page downs (this happens for
instance if I go to wikipedia).

Instead parsing it, and adding those links at the bottom, removing
javascript, and perhaps passing on only the text. Well, those are only
ideas. Maybe I'll try, maybe I won't. Time will tell! =)
Computer Nerd Kev
2024-11-26 21:52:52 UTC
Reply
Permalink
Post by D
Post by yeti
Post by D
I have been thinking about moving the reading part of web browsing
into the terminal as well, but haven't found a browser I'm happy
with.
I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
Gopher and similar).
True.
I like seeing useful images, so prefer Dillo and Links (the latter
does support display via the framebuffer so you can run it
graphically without X).
Post by D
Post by yeti
Post by D
Maybe it would be possible to write a kind of "pre-processor" that
formats web sites with a text based browser in mind?
Despite me finding this solution really scary, something like that
<https://www.brow.sh/>
Ah yes... I've seen this before! I did drop it due to its dependency on
FF, but the concept is similar. My idea was to aggressively filter a web
page before passing it on to elinks or similar.
Perhaps rewriting it a bit in order to avoid the looooooong list of menu
options or links that always come up at the top of the page, before the
content of the page shows after a couple of page downs (this happens for
instance if I go to wikipedia).
Lucky if it's just a couple of page-downs; I can easily be
hammering the button on some insane pages where 10% is the actual
content and 90% is menu links. Often it's quicker to press End
and work up from the bottom, but many websites have a few pages of
junk at the bottom too now, so you have to hunt for the little
sliver of content in the middle.
Post by D
Instead parsing it, and adding those links at the bottom, removing
javascript, and perhaps passing on only the text.
A similar approach is taken by frogfind.com, except rather than
parsing the links and putting them at the end, it deletes them,
which makes it impossible to navigate many websites. It does the
other things you mention, but the link rewriting would probably be
the hardest part to get right with a universal parser.

Site-specific front-ends are a simpler goal. This is a list of ones
that work in Dillo, and therefore without Javascript:
https://alex.envs.net/dillectory/

Of course then you have the problem of them breaking as soon as the
target site/API changes or blocks them.
--
__ __
#_ < |\| |< _#
D
2024-11-27 09:51:41 UTC
Reply
Permalink
Post by Computer Nerd Kev
Post by D
Post by yeti
Post by D
I have been thinking about moving the reading part of web browsing
into the terminal as well, but haven't found a browser I'm happy
with.
I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
Gopher and similar).
True.
I like seeing useful images, so prefer Dillo and Links (the latter
does support display via the framebuffer so you can run it
graphically without X).
For some reason, I never managed to get the framebuffer to work. Have no
idea why. =( I would like to get it to work though. Dillo was a good tip!
I did play with it for a bit, but then forgot about it. Maybe the reason
was a lack of tabs or buffers. I think links or maybe it was elinks, had a
way for me to replicate tabs or vi buffers in the browser. It was super
convenient!

Basically my ideal would be to move all my "reading" to a text-based
browser, so that I would only have to keep work-related stuff in the
massive GUI browser. All the other 60+ tabs would live in the text
browser, where I would reference them when needed.
Post by Computer Nerd Kev
Post by D
Post by yeti
Post by D
Maybe it would be possible to write a kind of "pre-processor" that
formats web sites with a text based browser in mind?
Despite me finding this solution really scary, something like that
<https://www.brow.sh/>
Ah yes... I've seen this before! I did drop it due to its dependency on
FF, but the concept is similar. My idea was to aggressively filter a web
page before passing it on to elinks or similar.
Perhaps rewriting it a bit in order to avoid the looooooong list of menu
options or links that always come up at the top of the page, before the
content of the page shows after a couple of page downs (this happens for
instance if I go to wikipedia).
Lucky if it's just a couple of page-downs, I can easily be
hammering the button on some insane pages where 10% is the actual
content and 90% is menu links. Often it's quicker to press End
and work up from the bottom, but many websites have a few pages of
junk at the bottom too now, so you have to hunt for the little
sliver of content in the middle.
I know... as a perfectionist this does not go down well with me. ;)
Post by Computer Nerd Kev
Post by D
Instead parsing it, and adding those links at the bottom, removing
javascript, and perhaps passing on only the text.
A similar approach is taken by frogfind.com, except rather than
parsing the links and putting them at the end, it detetes them,
which makes it impossible to navigate many websites. It does the
other things you mention, but the link rewriting would probably be
the hardest part to get right with a universal parser.
Did not know about frogfind! This could be a great start to improve the
readability! In my home brew rss2email script, I automatically create
archive.is links, so that when I want to read articles behind paywalls,
archive.is is already built in.

I imagine that I could whip up something similar, running the page
through http://frogfind.com/read.php?a=xyz... !
Post by Computer Nerd Kev
Site-specific front-ends are a simpler goal. This is a list of ones
https://alex.envs.net/dillectory/
Of course then you have the problem of them breaking as soon as the
target site/API changes or blocks them.
This is the truth!
Computer Nerd Kev
2024-11-27 20:44:48 UTC
Reply
Permalink
Post by D
Post by Computer Nerd Kev
Post by yeti
Post by D
I have been thinking about moving the reading part of web browsing
into the terminal as well, but haven't found a browser I'm happy
with.
I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
Gopher and similar).
True.
I like seeing useful images, so prefer Dillo and Links (the latter
does support display via the framebuffer so you can run it
graphically without X).
For some reason, I never managed to get the framebuffer to work. Have no
idea why. =( I would like to get it to work though.
I guess the framebuffer is working for the console, otherwise it
will probably be a low-res BIOS character display like in DOS. So
either a permissions problem or do you know that you need to start
Links with the "-g" option?
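i.e. something like this, assuming the graphics-capable build (often
packaged as links2 on Linux distributions):

  links -g http://example.com/    # framebuffer on the console, a window under X
  links2 -g http://example.com/   # where the graphics build is a separate package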
Post by D
Dillo was a good tip!
I did play with it for a bit, but then forgot about it. Maybe the reason
was a lack of tabs or buffers. I think links or maybe it was elinks, had a
way for me to replicate tabs or vi buffers in the browser. It was super
convenient!
Links doesn't do tabs, eLinks might but I haven't used it much.
Dillo has tabs, but isn't great for managing huge numbers of them
(although I avoid trying to do that anywhere).
--
__ __
#_ < |\| |< _#
yeti
2024-11-28 05:12:00 UTC
Reply
Permalink
Post by Computer Nerd Kev
Links doesn't do tabs, eLinks might
Elinks does.


... but now for something completely different:

Have you seen Twin?

<https://github.com/cosmos72/twin>
--
1. Hitchhiker 5: (101) "You just come along with me and have a good
time. The Galaxy's a fun place. You'll need to have this fish in your
ear."
D
2024-11-28 09:52:26 UTC
Reply
Permalink
Post by Computer Nerd Kev
Post by D
Post by Computer Nerd Kev
Post by yeti
Post by D
I have been thinking about moving the reading part of web browsing
into the terminal as well, but haven't found a browser I'm happy
with.
I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
Gopher and similar).
True.
I like seeing useful images, so prefer Dillo and Links (the latter
does support display via the framebuffer so you can run it
graphically without X).
For some reason, I never managed to get the framebuffer to work. Have no
idea why. =( I would like to get it to work though.
I guess the framebuffer is working for the console, otherwise it
will probably be a low-res BIOS character display like in DOS. So
either a permissions problem or do you know that you need to start
Links with the "-g" option?
Ahh... ok, that might explain it. If it is console only, then it might not
work in my terminal emulator, and -g just opens a window in X.

I would have liked for it to show images in the terminal, but maybe I
need to find another terminal emulator for that to work? I think I use
the default one that comes with xfce.
Post by Computer Nerd Kev
Post by D
Dillo was a good tip!
I did play with it for a bit, but then forgot about it. Maybe the reason
was a lack of tabs or buffers. I think links or maybe it was elinks, had a
way for me to replicate tabs or vi buffers in the browser. It was super
convenient!
Links doesn't do tabs, eLinks might but I haven't used it much.
Dillo has tabs, but isn't great for managing huge numbers of them
(although I avoid trying to do that anywhere).
Hmm, I should revisit that. I did manage to hack together something
similar to buffers, but don't remember at the moment what I did exactly.
Computer Nerd Kev
2024-11-28 20:17:21 UTC
Reply
Permalink
Post by D
Post by Computer Nerd Kev
Post by D
Post by Computer Nerd Kev
Post by yeti
I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
Gopher and similar).
True.
I like seeing useful images, so prefer Dillo and Links (the latter
does support display via the framebuffer so you can run it
graphically without X).
For some reason, I never managed to get the framebuffer to work. Have no
idea why. =( I would like to get it to work though.
I guess the framebuffer is working for the console, otherwise it
will probably be a low-res BIOS character display like in DOS. So
either a permissions problem or do you know that you need to start
Links with the "-g" option?
Ahh... ok, that might explain it. If it is console only, then it might not
work in my terminal emulator, and -g just opens a window in X.
Certainly, in X it'll always be in a separate window.
Post by D
I would have liked for it to shows images in the terminal, but maybe I
need to find another terminal emulator for that to work? I think I use the
default one that comes with xfce.
W3m displays images in XTerm and other terminal emulators, so that
might be what you want for a browser. I'm not sure if there's a
list of terminal emulators that support image display from it.
This page mentions that some require changes to the configuration:
https://wiki.archlinux.org/title/W3m
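A minimal thing to try, assuming a w3m build with inline-image support:

  w3m -o auto_image=TRUE https://example.com/   # ask w3m to load inline images

(Inside w3m, 'o' opens the option panel where the image settings live.)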
--
__ __
#_ < |\| |< _#
D
2024-11-28 21:05:14 UTC
Reply
Permalink
Post by Computer Nerd Kev
Post by D
Post by Computer Nerd Kev
Post by D
Post by Computer Nerd Kev
Post by yeti
I use Elinks, Emacs/EWW and W3m, but none of them can replace the scary
fullfat browsers. They seem to just fit Smolweb stuff (FTP, Gemini,
Gopher and similar).
True.
I like seeing useful images, so prefer Dillo and Links (the latter
does support display via the framebuffer so you can run it
graphically without X).
For some reason, I never managed to get the framebuffer to work. Have no
idea why. =( I would like to get it to work though.
I guess the framebuffer is working for the console, otherwise it
will probably be a low-res BIOS character display like in DOS. So
either a permissions problem or do you know that you need to start
Links with the "-g" option?
Ahh... ok, that might explain it. If it is console only, then it might not
work in my terminal emulator, and -g just opens a window in X.
Certainly, in X it'll always be in a separate window.
Post by D
I would have liked for it to shows images in the terminal, but maybe I
need to find another terminal emulator for that to work? I think I use the
default one that comes with xfce.
W3m displays images in XTerm and other terminal emulators, so that
might be what you want for a browser. I'm not sure if there's a
list of terminal emulators that support image display from it.
https://wiki.archlinux.org/title/W3m
I did go back to play with elinks today, and it does seem like the text
based browser that gets absolutely closest to what I need with the ability
to auto save sessions.

I think that together with frogfind.com I have found my temporary
solution for the terminal! It is also trivial to migrate my open "reading
tabs" from Firefox to elinks by just doing a "save all open tabs", then
massaging the exported bookmarks file a bit, and then just opening all of
the sites from the command line. =)
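One way to do that massaging, assuming a standard Firefox bookmarks HTML
export (bookmarks.html is just a placeholder name):

  grep -o 'HREF="[^"]*"' bookmarks.html \
    | sed 's/^HREF="//; s/"$//' > links.txt    # keep just the URLs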
yeti
2024-11-29 01:37:10 UTC
Reply
Permalink
Post by Computer Nerd Kev
W3m displays images in XTerm and other terminal emulators, so that
might be what you want for a browser. I'm not sure if there's a
list of terminal emulators that support image display from it.
https://wiki.archlinux.org/title/W3m
W3M seems to put another window layer atop the terminal to display
images. It works, but my main use case for W3M is as the man page
viewer W3MMAN (aliased to man), so I don't care much for its image
capabilities.

Elinks has a `./configure` option to enable Sixels, which I did, and I
see the generated binary being linked to `libsixel`, found the run-time
option to enable Sixel graphics, but I never see any images displayed.

<https://github.com/rkd77/elinks>

If someone succeeds with this, please ping me.
--
Die Partei | Martin Sonneborn | Die Partei
Die Partei | Gespräch am Küchentisch, Teil II | Die Partei
D
2024-11-29 09:38:06 UTC
Reply
Permalink
Post by yeti
Post by Computer Nerd Kev
W3m displays images in XTerm and other terminal emulators, so that
might be what you want for a browser. I'm not sure if there's a
list of terminal emulators that support image display from it.
https://wiki.archlinux.org/title/W3m
I think W3M seems to put another window layer atop the terminal to
display images. It works, but my main use case for W3M is as man page
viewer W3MMAN (aliased to man), so I don't care much for it's image
capabilities.
Elinks has a `./configure` option to enable Sixels, which I did, and I
see the generated binary being linked to `libsixel`, found the run-time
option to enable Sixel graphics, but I never see any images displayed.
<https://github.com/rkd77/elinks>
If someone succeeds with this, please ping me.
Thank you for mentioning it. I will have a look!
D
2024-11-29 21:39:12 UTC
Reply
Permalink
Post by D
Post by yeti
Post by Computer Nerd Kev
W3m displays images in XTerm and other terminal emulators, so that
might be what you want for a browser. I'm not sure if there's a
list of terminal emulators that support image display from it.
https://wiki.archlinux.org/title/W3m
I think W3M seems to put another window layer atop the terminal to
display images. It works, but my main use case for W3M is as man page
viewer W3MMAN (aliased to man), so I don't care much for it's image
capabilities.
Elinks has a `./configure` option to enable Sixels, which I did, and I
see the generated binary being linked to `libsixel`, found the run-time
option to enable Sixel graphics, but I never see any images displayed.
<https://github.com/rkd77/elinks>
If someone succeeds with this, please ping me.
Thank you for mentioning it. I will have a look!
I tried elinks with frogfind.com and I discovered that the best way to
kind of replicate buffers is to start elinks with all the sites I have on
my reading list (elinks $(cat links.txt)). In links.txt I have prefixed
all my sites with frogfind.com.
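Roughly like this; the read.php?a= form is the one mentioned earlier in
the thread, so treat it as an assumption and check it against the site:

  sed 's|^|http://frogfind.com/read.php?a=|' links.txt > reading.txt
  elinks $(cat reading.txt)    # every "tab" opens through FrogFind's reader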

I then discovered that they all entered the global history file, and in
that file I can search among the sites.

So all sites are opened in invisible tabs, and I can search for them in
either the global history, or I can make sure they are all saved as
bookmarks and drop the tabs altogether.

Frogfind makes it fairly palatable!
Mike Spencer
2024-11-26 21:57:53 UTC
Reply
Permalink
Post by D
Post by yeti
<https://www.brow.sh/>
Ah yes... I've seen this before! I did drop it due to its dependency on
FF, but the concept is similar. My idea was to aggressively filter a web
page before passing it on to elinks or similar.
Perhaps rewriting it a bit in order to avoid the looooooong list of menu
options or links that always come up at the top of the page, before the
content of the page shows after a couple of page downs (this happens for
instance if I go to wikipedia).
Instead parsing it, and adding those links at the bottom, removing
javascript, and perhaps passing on only the text. Well, those are only
ideas. Maybe I'll try, maybe I won't. Time will tell! =)
I've done this for a few individual sites that I visit frequently.

+ A link to that site resides on my browser's "home" page.

+ That home page is a file in ~/html/ on localhost.

+ The link is actually to a target-specific cgi-bin Perl script on
localhost where Apache is running, restricted to requests from
localhost.

+ The script takes the URL sent from the home page, rewrites it for
the routable net, sends it to the target using wget and reads all
of the returned data into a variable.

+ Using Perl's regular expressions, stuff identified (at time of
writing the script) as unwanted is elided -- js, style, svg,
noscript etc. URLs self-referencing the target are rewritten to
to be sent through the cgi-bin script.

+ Other tweaks peculiar to the specific target...

+ Result is handed back to the browser preceded by minimal HTTP
headers.

So far, works like a charm. Always the potential that a target host
will change their format significantly. That has happened a couple of
times, requiring fetching an unadorned copy of the target's page,
tedious reading/parsing and edit to the script.

This obviously doesn't work for those sites that initially send a
dummy all-js page to verify that you have js enabled and send you a
condescending reproof if you don't. Other server-side dominance games
are a potential challenge or a stone wall.

Writing a generalized version, capable of dealing with pages from
random/arbitrary sites is a notion perhaps worth pursuing but clearly
more of a challenge than site-specific scripts. RSN, round TUIT etc.
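For anyone who wants the flavour of it without the Apache/CGI setup,
here is a rough shell sketch of the same shape -- not Mike's script, and
the filtering is deliberately naive and purely illustrative:

  #!/bin/sh
  # fetch a page, crudely strip scripts and styles, read it in w3m
  url="$1"
  wget -q -O - "$url" \
    | perl -0777 -pe 's/<script\b.*?<\/script>//gis; s/<style\b.*?<\/style>//gis' \
    | w3m -T text/html -dump | less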
--
Mike Spencer Nova Scotia, Canada
D
2024-11-27 09:54:50 UTC
Reply
Permalink
Post by Mike Spencer
Post by D
Post by yeti
<https://www.brow.sh/>
Ah yes... I've seen this before! I did drop it due to its dependency on
FF, but the concept is similar. My idea was to aggressively filter a web
page before passing it on to elinks or similar.
Perhaps rewriting it a bit in order to avoid the looooooong list of menu
options or links that always come up at the top of the page, before the
content of the page shows after a couple of page downs (this happens for
instance if I go to wikipedia).
Instead parsing it, and adding those links at the bottom, removing
javascript, and perhaps passing on only the text. Well, those are only
ideas. Maybe I'll try, maybe I won't. Time will tell! =)
I've done this for a few individual sites that I visit frequently.
+ A link to that site resides on my browser's "home" page.
+ That home page is a file in ~/html/ on localhost.
+ The link is actually to a target-specific cgi-bin Perl script on
localhost where Apache is running, restricted to requests from
localhost.
+ The script takes the URL sent from the home page, rewrites it for
the routable net, sends it to the target using wget and reads all
of the returned data into a variable.
+ Using Perl's regular expressions, stuff identified (at time of
writing the script) as unwanted is elided -- js, style, svg,
noscript etc. URLs self-referencing the target are rewritten to
to be sent through the cgi-bin script.
+ Other tweaks peculiar to the specific target...
+ Result is handed back to the browser preceded by minimal HTTP
headers.
So far, works like a charm. Always the potential that a target host
will change their format significantly. That has happened a couple of
times, requiring fetching an unadorned copy of the target's page,
tedious reading/parsing and edit to the script.
This obviously doesn't work for those sites that initially send a
dummy all-js page to verify that you have js enabled and send you a
condescending reproof if you don't. Other server-side dominance games
a potential challenge or a stone wall.
Writing a generalized version, capable of dealing with pages from
random/arbitrary sites is a notion perhaps worth pursuing but clearly
more of a challenge than site-specific scripts. RSN, round TUIT etc.
Brilliant! You are a poet Mike!

Frogfind.com was a great start! I would love to have some kind of
crowd-sourced HTML5-to-HTML1, minus-JavaScript, minus-garbage script.

I also wondered if another approach might just be to take the top 500
sites and base it on that? Or even looking through my own history, take
the top 100.

Due to the bad development of the net, it seems like a greater and
greater part of our browsing takes place on an ever smaller number of
sites.
Mike Spencer
2024-11-28 05:41:56 UTC
Reply
Permalink
Post by D
Brilliant! You are a poet Mike!
I'm doubtful that poetry can be done in Perl. Maybe free verse in
Lisp.
Post by D
Frogfind.com was a great start! I would love to have some kind of crowd
sourced html5->html1 - javascript - garbage script.
Do note that Frogfind delivers URLs that send your click back to
Frogfind to be proxied. I assume that's how you get de-enshitified
pages in response to clicking a link returned from a search.

Here's a curiosity:

Google also sends all of your clicks on search results back through
Google. I assume y'all knew that.

If you search for (say):

leon "the professional"

you get:

https://www.google.com/url?q=https://en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional&sa=U&ved=2ahUKEwi [snip tracking hentracks/data]

Note that the "real" URL which Google proposes to proxy for you
contains non-ASCII characters:

en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional

Wikipedia does *not* *have* a page connected to that URL! But if you
click the link and send it back through Google, you reach the right
Wikipedia page that *does* exist:

en.wikipedia.org/wiki/Leon:_The_Professional

AFAICT, when spidering the net, Google finds the page that *does*
exist, modifies it according to (opaque, unknown) rules of orthography
and delivers that to you. When you send that link back through
Google, Google silently reverts the imposed orthographic "correction"
so that the link goes to an existing page.

Isn't that weird?

Try it. Copy the "real" URL from such a Google response, eliding
everything before (and including) "?q=" and after (and including) the
first "&", paste it into your browser. Wikipedia will politely tell
you that no such page is available and offer search suggestions.
Revert the non-ASCII "e with a diacritical mark" to 'e' (mutatis
mutandis for similar Google "hits") and it will work.
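The hand-editing can be scripted, of course; a crude one-liner version
of that recipe (it does not undo the extra %25 quoting, which is the
part Google does on the way back through):

  printf '%s\n' "$google_url" | sed -e 's/.*[?&]q=//' -e 's/&.*//'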
Post by D
I also wondered if another approach might just be to take the top 500
sites and base it on that? Or even looking through my own history, take
the top 100.
Now there's a project suitable for AI: train the NN to treat a response
containing stuff you don't want ever to see as a failure. Grovel
repetitively through terabytes of HTML and finally come up with a
generalized filter solution.
Post by D
Due to the bad development of the net, it seems like a greater and
greater part of our browsing takes place on ever fewer numbers of
sites.
--
Mike Spencer Nova Scotia, Canada
Lawrence D'Oliveiro
2024-11-28 06:42:10 UTC
Reply
Permalink
Post by Mike Spencer
AFAICT, when spidering the net, Google finds the page that *does*
exist, modifies it according to (opaque, unknown) rules of orthography
and delivers that to you.
It adds an entirely unnecessary extra level of URL quoting.

Trying your example through a redirection-removal script I hacked
together:

***@theon:unredirect> ./unredirect 'https://www.google.com/url?q=https://en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional&sa=U&ved=2ahUKEwi'
https://en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional

Wrong.

***@theon:unredirect> ./unredirect --unquote 'https://www.google.com/url?q=https://en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional&sa=U&ved=2ahUKEwi'
https://en.wikipedia.org/wiki/L%C3%A9on:_The_Professional

Right.
D
2024-11-28 09:56:54 UTC
Reply
Permalink
Post by Mike Spencer
Post by D
Brilliant! You are a poet Mike!
I'm doubtful that poetry can be done in Perl. Maybe free verse in
Lisp.
Is it true that Lisp is the secret name of god?
Post by Mike Spencer
Post by D
Frogfind.com was a great start! I would love to have some kind of crowd
sourced html5->html1 - javascript - garbage script.
Do note that Frogfind delivers URLs that send your click back to
Frogfind to be proxied. I assume that's how you get de-enshitified
pages in response to clicking a link returned from a search.
Yes, I noted that.
Post by Mike Spencer
Google also sends all of your clicks on search results back through
Google. I assume y'all knew that.
Haven't used Google in a long time; I use ddg.gg or startpage.com
instead. As far as I can see from a quick glance, they don't rewrite
the URLs.
Post by Mike Spencer
Isn't the weird?
I imagine it is done to record it and to help build your profile somehow, which
can then be sold to advertisers?
Post by Mike Spencer
Post by D
I also wondered if another approach might just be to take the top 500
sites and base it on that? Or even looking through my own history, take
the top 100.
Now there's a project suitable for AI: train the NN to treat a response
containing stuff you don't want ever to see as a failure. Grovel
repetitively through terabytes of HTML and finally come up with a
generalized filter solution.
Maybe. I would be afraid of it becoming conscious and developing a will of its
own! ;)
Post by Mike Spencer
Post by D
Due to the bad development of the net, it seems like a greater and
greater part of our browsing takes place on ever fewer numbers of
sites.
Ivan Shmakov
2024-12-20 18:42:28 UTC
Reply
Permalink
[Cross-posting to news:comp.infosystems.www.misc just in case,
but setting Followup-To: comp.misc still. Feel free to disregard,
though; if anything, I'll be monitoring both groups for some
time for responses.]
Post by Mike Spencer
Google also sends all of your clicks on search results back through
Google. I assume y'all knew that.
leon "the professional"
https://www.google.com/url
?q=https://en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional
&sa=U&ved=2ahUKEwi [snip tracking hentracks/data]
Note that the "real" URL which Google proposes to proxy for you
en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional
Wikipedia does *not* *have* a page connected to that URL! But if you
click the link and send it back through Google, you reach the right
en.wikipedia.org/wiki/Leon:_The_Professional
And this page clearly states (search for "Redirected from" there)
that it was reached via an alias. If you follow the "Article"
link from there, it'll lead you to .../L%C3%A9on:_The_Professional
instead, which is the proper URI for that Wikipedia article.

Think of it. Suppose that Google has to return something like
http://example.com/?o=p&q=http://example.net/ as one of the
results. Can you just put it after google.com/url?q= directly
without ambiguity? You'd get:

http://google.com/url?q=http://example.com/?o=p&q=http://example.net/&...
^1 ^2

Normally, the URI would start after ?q= and go until the first ^1
occurrence of &, but in this case it'd actually be the second ^2
that terminates the intended URI. Naturally, Google avoids it
by %-encoding the ?s and &s, like:

http://google.com/url?q=http://example.com/%3fo=p%26q=http://example.net/&...

By the same merit, they need to escape %s themselves, should
the original URI contain any, so e. g. http://example.com/%d1%8a
becomes .../url?q=http://example.com/%25d1%258a&... .

Of course, Google didn't invent any of this: unless I be mistaken,
that's how HTML <form method="get" />s have worked from the get-go.
And you /do/ need something like Hello%3f%20%20Anybody%20home%3f
to put it after /guestbook?comment=.

FWIW, I tend to use the following Perl bits for %-encoding and
decoding, respectively:

s {[^0-9A-Za-z/_.-]}{${ \sprintf ("%%%02x", ord ($&)); }}g;
s {%([0-9a-fA-F]{2})}{${ \chr (hex ($1)); }}g;
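A quick round-trip with equivalent one-liners (same character class,
just using /e instead of the ${ \... } interpolation trick; the sample
string is the one from above):

  printf '%s' 'http://example.com/%d1%8a' \
    | perl -pe 's{[^0-9A-Za-z/_.-]}{ sprintf("%%%02x", ord($&)) }ge'
  # -> http%3a//example.com/%25d1%258a

  printf '%s' 'http%3a//example.com/%25d1%258a' \
    | perl -pe 's{%([0-9a-fA-F]{2})}{ chr(hex($1)) }ge'
  # -> http://example.com/%d1%8a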
Post by Mike Spencer
AFAICT, when spidering the net, Google finds the page that *does*
exist, modifies it according to (opaque, unknown) rules of orthography
and delivers that to you. When you send that link back through
Google, Google silently reverts the imposed orthographic "correction"
so that the link goes to an existing page.
Isn't the weird?
There's this bit near the end of the .../Leon:_The_Professional
(line split for readability):

<script type="application/ld+json">{
"@context":"https:\/\/schema.org",
"@type":"Article",
"name":"L\u00e9on: The Professional",
"url":"https:\/\/en.wikipedia.org\/wiki\/L%C3%A9on:_The_Professional",
[...]

I'm pretty certain that Google /does/ parse JSON-LD like in the
above, so I can only presume that when it finds a Web document
that points to a different "url": in this way, it (sometimes?)
uses the latter in preference to the original URI.

I've been thinking of adopting JSON-LD for my own Web pages
(http://am-1.org/~ivan/ , http://users.am-1.org/~ivan/ , etc.),
but so far have only used (arguably better readable)
http://microformats.org/wiki/microformats2 (that I hope search
engines will at some point add support for.) Consider, e. g.:

http://pin13.net/mf2/?url=http://am-1.org/~ivan/qinp-2024/112.l-system.en.xhtml

Note that ?url= above needs the exact same %-treatment as does
Google's /url?q=. Naturally, the HTML form at http://pin13.net/mf2/
will do it for you. (Or, rather: instruct your Web user agent
to do so.)
Andy Burns
2024-12-20 19:03:16 UTC
Reply
Permalink
Post by Ivan Shmakov
[Cross-posting to news:comp.infosystems.www.misc just in case,
but setting Followup-To: comp.misc still. Feel free to disregard,
though; if anything, I'll be monitoring both groups for some
time for responses.]
Post by Mike Spencer
Google also sends all of your clicks on search results back through
Google.
Probably because they wouldn't trust a browser to honour the ping
attribute of an anchor tag, that was designed for tracking?

<https://developer.mozilla.org/en-US/docs/Web/API/HTMLAnchorElement/ping>
Mike Spencer
2024-12-22 05:39:23 UTC
Reply
Permalink
[ Top-posting because this is brief and adds no new interspersed text...]

Thank you very much, Ivan, for redirecting ;-) my lagging attention to
the %25 hex encoded chars prefixed to the already hex encoded chars
and Google's pages/methods for dealing with them. Your detailed reply
is much appreciated.

I'll read your comments more carefully and see if I can't tweak my
Perl script, the behavior of which led to my original comments on
this, to Do The Right Thing.

[ Previous exchange left unaltered for the record.]
Post by Ivan Shmakov
[Cross-posting to news:comp.infosystems.www.misc just in case,
but setting Followup-To: comp.misc still. Feel free to disregard,
though; if anything, I'll be monitoring both groups for some
time for responses.]
Post by Mike Spencer
Google also sends all of your clicks on search results back through
Google. I assume y'all knew that.
leon "the professional"
https://www.google.com/url
?q=https://en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional
&sa=U&ved=2ahUKEwi [snip tracking hentracks/data]
Note that the "real" URL which Google proposes to proxy for you
en.wikipedia.org/wiki/L%25C3%25A9on:_The_Professional
Wikipedia does *not* *have* a page connected to that URL! But if you
click the link and send it back through Google, you reach the right
en.wikipedia.org/wiki/Leon:_The_Professional
And this page clearly states (search for "Redirected from" there)
that it was reached via an alias. If you follow the "Article"
link from there, it'll lead you to .../L%C3%A9on:_The_Professional
instead, which is the proper URI for that Wikipedia article.
Think of it. Suppose that Google has to return something like
http://example.com/?o=p&q=http://example.net/ as one of the
results. Can you just put it after google.com/url?q= directly
http://google.com/url?q=http://example.com/?o=p&q=http://example.net/&...
^1 ^2
Normally, the URI would start after ?q= and go until the first ^1
occurence of &, but in this case, it'd be actually the second ^2
that terminates the intended URI. Naturally, Google avoids it
http://google.com/url?q=http://example.com/%3fo=p%26q=http://example.net/&...
By the same merit, they need to escape %s themselves, should
the original URI contain any, so e. g. http://example.com/%d1%8a
becomes .../url?q=http://example.com/%25d1%258a&... .
Of course, Google didn't invent any of this: unless I be mistaken,
that's how HTML <form method="get" />s have worked from the get-go.
And you /do/ need something like Hello%3f%20%20Anybody%20home%3f
to put it after /guestbook?comment=.
FWIW, I tend to use the following Perl bits for %-encoding and
s {[^0-9A-Za-z/_.-]}{${ \sprintf ("%%%02x", ord ($&)); }}g;
s {%([0-9a-fA-F]{2})}{${ \chr (hex ($1)); }}g;
Post by Mike Spencer
AFAICT, when spidering the net, Google finds the page that *does*
exist, modifies it according to (opaque, unknown) rules of orthography
and delivers that to you. When you send that link back through
Google, Google silently reverts the imposed orthographic "correction"
so that the link goes to an existing page.
Isn't the weird?
There's this bit near the end of the .../Leon:_The_Professional
<script type="application/ld+json">{
"name":"L\u00e9on: The Professional",
"url":"https:\/\/en.wikipedia.org\/wiki\/L%C3%A9on:_The_Professional",
[...]
I'm pretty certain that Google /does/ parse JSON-LD like in the
above, so I can only presume that when it finds a Web document
that points to a different "url": in this way, it (sometimes?)
uses the latter in preference to the original URI.
I've been thinking of adopting JSON-LD for my own Web pages
(http://am-1.org/~ivan/ , http://users.am-1.org/~ivan/ , etc.),
but so far have only used (arguably better readable)
http://microformats.org/wiki/microformats2 (that I hope search
http://pin13.net/mf2/?url=http://am-1.org/~ivan/qinp-2024/112.l-system.en.xhtml
Note that ?url= above needs the exact same %-treatment as does
Google's /url?q=. Naturally, the HTML form at http://pin13.net/mf2/
will do it for you. (Or, rather: instruct your Web user agent
to do so.)
--
Mike Spencer Nova Scotia, Canada
Oregonian Haruspex
2024-12-04 06:11:40 UTC
Reply
Permalink
Emacs EWW seems to work with a large number of sites these days. I try
to do everything in Emacs. Of course, for some stuff like shopping and
banking a modern (aka bloated) browser is necessary. But Emacs is also
a TUI, not strictly a terminal program.

There is something serene about text as your interface. If I could get
Amazon, eBay, and my bank to work properly in EWW I wouldn’t even launch a
browser, ever.
Lawrence D'Oliveiro
2024-12-04 06:42:40 UTC
Reply
Permalink
But eMacs is also TUI, not strictly a terminal program.
It can display graphics. It has long been able to run under X11. I
currently use a GTK build that works under Wayland.
candycanearter07
2024-12-04 14:30:03 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
But eMacs is also TUI, not strictly a terminal program.
It can display graphics. It has long been able to run under X11. I
currently use a GTK build that works under Wayland.
But does it support JS?
--
user <candycane> is generated from /dev/urandom
Lawrence D'Oliveiro
2024-12-05 01:46:43 UTC
Reply
Permalink
Post by candycanearter07
Post by Lawrence D'Oliveiro
But eMacs is also TUI, not strictly a terminal program.
It can display graphics. It has long been able to run under X11. I
currently use a GTK build that works under Wayland.
But does it support JS?
This being Emacs, the answer would be “very likely”.

But ... relevance being?
Computer Nerd Kev
2024-12-07 21:52:33 UTC
Reply
Permalink
Post by candycanearter07
Post by Lawrence D'Oliveiro
But eMacs is also TUI, not strictly a terminal program.
It can display graphics. It has long been able to run under X11. I
currently use a GTK build that works under Wayland.
But does it support JS?
This being Emacs, the answer would be "very likely".
But ... relevance being?
Post by candycanearter07
Post by Lawrence D'Oliveiro
If I could get Amazon, eBay, and my bank to work properly in
EWW I wouldn't even launch a browser, ever.
I don't know about Emacs, but for TUI browsers with Javascript
support ELinks is one that I'm aware of. However like the
experimental JS support in Netsurf it doesn't seem to be advanced
enough to be useful (although unlike Netsurf, ELinks uses Mozilla's
SpiderMonkey JS engine, so I'm not exactly sure what makes it so
difficult to get right).
--
__ __
#_ < |\| |< _#
root
2024-12-08 14:11:04 UTC
Reply
Permalink
Post by Computer Nerd Kev
I don't know about Emacs, but for TUI browsers with Javascript
support ELinks is one that I'm aware of. However like the
experimental JS support in Netsurf it doesn't seem to be advanced
enough to be useful (although unlike Netsurf, ELinks uses Mozilla's
SpiderMonkey JS engine, so I'm not exactly sure what makes it so
difficult to get right).
I regard ELinks as worthless. At best, I hope it is a work in
progress. I haven't tried Netsurf, but I have tried implementing,
via jsdom, specific fetch routines for different sites of interest.
I have found that even sites that contain JSON data do not provide
consistent (across sites) methods of fetching the data. It is worse
when the data are not organized as JSON but are distributed in ways
unique to the specific site.
Bozo User
2025-01-12 23:01:23 UTC
Reply
Permalink
Post by root
Post by Computer Nerd Kev
I don't know about Emacs, but for TUI browsers with Javascript
support ELinks is one that I'm aware of. However like the
experimental JS support in Netsurf it doesn't seem to be advanced
enough to be useful (although unlike Netsurf, ELinks uses Mozilla's
SpiderMonkey JS engine, so I'm not exactly sure what makes it so
difficult to get right).
I regard ELinks as worthless. At best, I hope it is a work in
progress. I haven't tried Netsurf, but I have tried implementing,
via jsdom, specific fetch routines for different sites of interest.
I have found that even sites that contain json data do not provide
consistent (across sites) methods of fetching the data. It is
worse when the data are not as organized as json data, but it is
distributed in unique ways for the specific site.
Once you get a Gopher/Gemini browser, among yt-dlp, the web can go away.

Try these under lynx:

gopher://magical.fish
gopher://gopherddit.com
gopher://sdf.org
gopher://hngopher.com

gemini://gemi.dev (head to news waffle)

Magical Fish is a HUGE portal, and even a 386 would be
able to use the services. You have a news source,
a translator, stock prices, weather, Wikipedia over gopher,
Gutenberg, torrent search...

Have fun.
D
2025-01-13 09:46:53 UTC
Reply
Permalink
Post by Bozo User
Post by root
Post by Computer Nerd Kev
I don't know about Emacs, but for TUI browsers with Javascript
support ELinks is one that I'm aware of. However like the
experimental JS support in Netsurf it doesn't seem to be advanced
enough to be useful (although unlike Netsurf, ELinks uses Mozilla's
SpiderMonkey JS engine, so I'm not exactly sure what makes it so
difficult to get right).
I regard ELinks as worthless. At best, I hope it is a work in
progress. I haven't tried Netsurf, but I have tried implementing,
via jsdom, specific fetch routines for different sites of interest.
I have found that even sites that contain json data do not provide
consistent (across sites) methods of fetching the data. It is
worse when the data are not as organized as json data, but it is
distributed in unique ways for the specific site.
Once you get a Gopher/Gemini browser, among yt-dlp, the web can go away.
gopher://magical.fish
gopher://gopherddit.com
gopher://sdf.org
gopher://hngopher.com
gemini://gemi.dev (head to news waffle)
Magical Fish it's a HUGE portal and even a 386 would be
able to use the services. You have a news source,
a translator, stock prices, weather, wikipedia over gopher,
Gutenberg, torrent search...
Have fun.
I imagine it would be very easy to write scripts to pull in whatever
regular www site you might like and move it to gopher. I found it sad that
gemini came into being and split the energies between gopher and gemini.

I will have to remember magical.fish. Gopher works beautifully in links!
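To illustrate the mirroring idea above, a minimal sketch (purely
illustrative: it assumes lynx for the HTML-to-text step and a gopher
root under /var/gopher, both of which are just my assumptions):

$ lynx -dump -nolist https://example.com/article.html > /var/gopher/article.txt
# dump the page as plain text into the gopher root; a gophermap line can
# then point at article.txt as an item of type 0 (plain text file)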
Computer Nerd Kev
2025-01-13 20:52:03 UTC
Reply
Permalink
Post by D
Post by Bozo User
Once you get a Gopher/Gemini browser, among yt-dlp, the web can go away.
gopher://magical.fish
gopher://gopherddit.com
gopher://sdf.org
gopher://hngopher.com
gemini://gemi.dev (head to news waffle)
Magical Fish it's a HUGE portal and even a 386 would be
able to use the services. You have a news source,
a translator, stock prices, weather, wikipedia over gopher,
Gutenberg, torrent search...
Have fun.
I imagine it would be very easy to write scripts to pull in what ever
regular www site you might like and move it to gopher.
If it has a friendly API and that doesn't change every month. I
notice Gopherddit.com is broken, it just says "Subreddit not found"
for everything. Not that I care to read Reddit anyway.
Post by D
I will have to remember magical.fish. Gohper works beautifully in links!
No Gopher support in Links, I guess you mean ELinks or Lynx.
--
__ __
#_ < |\| |< _#
D
2025-01-14 17:54:15 UTC
Reply
Permalink
Post by Computer Nerd Kev
Post by D
Post by Bozo User
Once you get a Gopher/Gemini browser, among yt-dlp, the web can go away.
gopher://magical.fish
gopher://gopherddit.com
gopher://sdf.org
gopher://hngopher.com
gemini://gemi.dev (head to news waffle)
Magical Fish it's a HUGE portal and even a 386 would be
able to use the services. You have a news source,
a translator, stock prices, weather, wikipedia over gopher,
Gutenberg, torrent search...
Have fun.
I imagine it would be very easy to write scripts to pull in what ever
regular www site you might like and move it to gopher.
If it has a friendly API and that doesn't change every month. I
notice Gopherddit.com is broken, it just says "Subreddit not found"
for everything. Not that I care to read Reddit anyway.
Post by D
I will have to remember magical.fish. Gohper works beautifully in links!
No Gopher support in Links, I guess you mean ELinks or Lynx.
This is correct. I meant elinks. Apologies for the confusion.
Ivan Shmakov
2025-01-16 07:55:45 UTC
Reply
Permalink
[Cross-posting to news:comp.infosystems.www.misc just in case, but
setting Followup-To: comp.misc so as to keep the thread there.]
Post by Bozo User
Once you get a Gopher/Gemini browser, among yt-dlp, the web can go away.
While I do appreciate the availability of yt-dlp, I feel like
a huge part of the reason Chromium is huge is so it can support
Youtube. Granted, there doesn't seem to be as many DSAs for
video software (codecs and players) [1], but it's still the
kind of software I'd rather keep at least in a container.

[1] news:linux.debian.announce.security

(Not that I see much reason to listen to a video blogger talk
for fifteen minutes to convey the same information I can get
from five minutes of reading in the first place. A relative
of mine watches most videos at double speed, but I don't
have that kind of fast listening skill myself, alas.)

Perhaps more important is that the Web can be understood as
a bunch of interlinked resources identified by URIs. And even
though modern browsers might fail to handle some of them,
traditional ones (like Lynx and, reportedly, SeaMonkey) still
support things like news:, mailto:, gopher:, and even ftp:.
Post by Bozo User
gopher://magical.fish
gopher://gopherddit.com
gopher://sdf.org
gopher://hngopher.com
gemini://gemi.dev (head to news waffle)
By the by, what's the equivalent of wget(1) for gopher:?

I understand that a lot of website operators don't care about
making their sites easy to download (and some, like the
aforementioned Youtube, try their best to make downloading hard,
for reasons), but I still care about downloading them regardless.

Of course, I try to make my own webpages compatible with
"wget -p"; e. g.:

http://am-1.org/~ivan/qinp-2021/096.sys.en.xhtml
http://am-1.org/~ivan/qinp-2024/112.l-system.en.xhtml
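
A minimal sketch of what I mean by "compatible" (the -k flag, which
rewrites links for offline viewing, is optional):

$ wget -p -k http://am-1.org/~ivan/qinp-2021/096.sys.en.xhtml
# fetches the page itself plus the images and stylesheets it references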

(I intend to implement Rsync access at some point as well,
though no concrete plan ATM.)
Post by Bozo User
Magical Fish it's a HUGE portal and even a 386 would be able to use
the services.
I do, in fact, have an Am386 box on my LAN with Lynx on it, but
it won't work as I don't do NAT, preferring an application level
gateway, Polipo, instead. (Reasoning vaguely along the lines
that I'd rather have a proxy crash, than kernel.) Polipo, though,
only supports HTTP; as well as CONNECT, but Lynx can't use that
for accessing gopher:. (Squid provides HTTP access to ftp: and,
IIRC, gopher:, but it's been a decade since I last ran it.)
Post by Bozo User
You have a news source, a translator, stock prices, weather,
wikipedia over gopher, Gutenberg, torrent search...
Is Wikipedia over gopher any better in Lynx than Wikipedia over
HTTP? Same for Gutenberg.
Computer Nerd Kev
2025-01-16 21:10:03 UTC
Reply
Permalink
Post by Ivan Shmakov
Post by Bozo User
Once you get a Gopher/Gemini browser, among yt-dlp, the web can go away.
While I do appreciate the availability of yt-dlp, I feel like
a huge part of the reason Chromium is huge is so it can support
Youtube. Granted, there doesn't seem to be as many DSAs for
video software (codecs and players) [1], but it's still the
kind of software I'd rather keep at least in a container.
You fear that a hacker can upload a YouTube video containing an
exploit and manage to pass that exploit through YouTube's
transcoding in order to attack Linux video player programs? Seems
like a big stretch to me.
Post by Ivan Shmakov
By the by, what's the equivalent of wget(1) for gopher:?
Curl supports Gopher. Not Gemini though.
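For example (assuming a curl build with gopher support compiled in):

$ curl -s gopher://sdf.org/ -o sdf-menu.txt
# fetches the server's root menu (a tab-separated gopher item list)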
--
__ __
#_ < |\| |< _#
yeti
2025-01-17 04:16:19 UTC
Reply
Permalink
Post by Computer Nerd Kev
Curl supports Gopher. Not Gemini though.
Ncat and Netcat (check the existence of '-c' and '-T') can fetch stuff
from Gemini servers:

------------------------------------------------------------------------
$ printf 'gemini://geminiprotocol.net/\r\n' \
| ncat --ssl geminiprotocol.net 1965 | less
------------------------------------------------------------------------

------------------------------------------------------------------------
$ printf 'gemini://geminiprotocol.net/\r\n' \
| nc -c -T noverify geminiprotocol.net 1965 | less
------------------------------------------------------------------------

Wrapping that in a handful of AWK to find links and iterate over
them should not require deep magic.
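
A rough sketch of the link-extraction step (gemtext link lines start
with "=>", so this strips that prefix and prints the URI part; the host
is just the example from above):

------------------------------------------------------------------------
$ printf 'gemini://geminiprotocol.net/\r\n' \
| ncat --ssl geminiprotocol.net 1965 \
| awk '/^=>/ { sub(/^=>[ \t]*/, ""); print $1 }'
------------------------------------------------------------------------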

Some browsers capable of accessing gemini: can save the fetched files'
and gemini pages' source; maybe they would even be easier to integrate
into your own scripts?

TL;DR: There is no showstopper.
--
Trust me, I know what I'm doing...
Ivan Shmakov
2025-01-18 14:05:40 UTC
Reply
Permalink
Post by Computer Nerd Kev
Post by Bozo User
Once you get a Gopher/Gemini browser, among yt-dlp, the web can go away.
I. e., my point being: you can't escape web by switching to
Gopher, because Gopher /is/ web. (Even if 'darker' part of it.)
Post by Computer Nerd Kev
While I do appreciate the availability of yt-dlp, I feel like a
huge part of the reason Chromium is huge is so it can support
Youtube. Granted, there doesn't seem to be as many DSAs for video
software (codecs and players), but it's still the kind of software
I'd rather keep at least in a container.
You fear that a hacker can upload a YouTube video containing
an exploit and manage to pass that exploit through YouTube's
transcoding in order to attack Linux video player programs?
Seems like a big stretch to me.
I'm not familiar with how Youtube processes its videos; I've
never even uploaded anything there myself, much less looked at
their sources for security issues that might or might not be
there.

(I do have experience with Wikimedia Commons, and I'm reasonably
certain that while they offer processed versions of the user
uploads, they still keep the originals in publicly accessible
locations on their servers. Why, I distinctly recall uploading
a fixed version of someone else's malformed SVG file there.)

Neither do I have any idea how opposed they would be to requests
from companies to introduce such security issues deliberately.
(I believe such hypothetical business entities are usually
referred to as "MAFIAA" in colloquial speech, but I can't help
but note that the company that pioneered the approach was in
fact Sony [1].)

[1] http://duckduckgo.com/html/?kd=-1&q="sony"+rootkit+controversy

And even were I to believe that videos downloaded from Youtube
could never ever have any potential security flaw whatsoever,
having two copies of video player software installed, one
within and one without container, would still be ill-advised,
if only for the reason that I might use an out-of-container
install for a potentially unsafe, non-Youtube video by accident.
Post by Computer Nerd Kev
By the by, what's the equivalent of wget(1) for gopher:?
Curl supports Gopher. Not Gemini though.
Curl is my tool of choice for doing API calls; say (JFTR, [2]
has a couple of complete examples):

$ curl -iv --form-string comment="New file." \
-F file=@my.jpeg -F text=\</dev/fd/5 5< my.jpeg.mw \
--form-string filesize="$(wc -c < my.jpeg)" \
--form-string token="1337cafe+\\" \
... -- https://commons.wikimedia.org/w/api.php\
"?action=upload&format=xml&assert=user"

[2] http://am-1.org/~ivan/src/examples-2024/webwatch.mk

However, I distinctly recall finding it inadequate as a mirroring
tool back in the day. (Though that might've changed meanwhile.)

And similarly for yeti's comment in [3]: I try to share what I know
with others. Such as on IRC. So, suppose someone asks on IRC,
"how do I get an offline copy of gopher://example.com/?"

"You can easily write your own Gopher / Gemini recursive
downloader yourself" is not something I'd be comfortable giving
as an answer, TBH. (Though I /would/ be comfortable with
providing assistance if someone explicitly asks for help with
writing one in the first place.)

[3] news:***@tilde.institute
Computer Nerd Kev
2025-01-18 23:09:15 UTC
Reply
Permalink
Post by Ivan Shmakov
Post by Computer Nerd Kev
Post by Bozo User
Once you get a Gopher/Gemini browser, among yt-dlp, the web can go away.
I. e., my point being: you can't escape web by switching to
Gopher, because Gopher /is/ web. (Even if 'darker' part of it.)
Post by Computer Nerd Kev
While I do appreciate the availability of yt-dlp, I feel like a
huge part of the reason Chromium is huge is so it can support
Youtube. Granted, there doesn't seem to be as many DSAs for video
software (codecs and players), but it's still the kind of software
I'd rather keep at least in a container.
You fear that a hacker can upload a YouTube video containing
an exploit and manage to pass that exploit through YouTube's
transcoding in order to attack Linux video player programs?
Seems like a big stretch to me.
I'm not familiar with how Youtube processes its videos; I've
never even uploaded anything there myself, much less looked at
their sources for security issues that might or might not be
there.
The files I download from YouTube always contain the metadata
string (in both audio and video streams):
"ISO Media file produced by Google Inc."

But I always use the lowest quality option.
Post by Ivan Shmakov
Post by Computer Nerd Kev
By the by, what's the equivalent of wget(1) for gopher:?
Curl supports Gopher. Not Gemini though.
Curl is my tool of choice for doing API calls; say (JFTR, [2]
$ curl -iv --form-string comment="New file." \
--form-string filesize="$(wc -c < my.jpeg)" \
--form-string token="1337cafe+\\" \
... -- https://commons.wikimedia.org/w/api.php\
"?action=upload&format=xml&assert=user"
[2] http://am-1.org/~ivan/src/examples-2024/webwatch.mk
However, I distinctly recall finding it inadequate as a mirroring
tool back in the day. (Though that might've changed meanwhile.)
That's true, Curl doesn't do mirroring. Command-line tools for that may
exist, but the one option I'm aware of is that the Gopherus Gopher
client, since version 1.2, has the feature "all files from current
folder can be downloaded by pressing F10". Not comparable to Wget's
recursive mode, but enough for some tasks.
--
__ __
#_ < |\| |< _#
candycanearter07
2025-01-29 20:10:03 UTC
Reply
Permalink
[snip]
Post by Computer Nerd Kev
The files I download from YouTube always contain the metadata
"ISO Media file produced by Google Inc."
Weird. I think yt-dlp has an option to overwrite the metadata with info
about the video itself?
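If memory serves, something like this (flag name quoted from memory, so
treat it as an assumption):

$ yt-dlp --embed-metadata 'https://www.youtube.com/watch?v=...'
# asks yt-dlp to write the video's own title, uploader, etc. into the
# output file's tags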
--
user <candycane> is generated from /dev/urandom
Lawrence D'Oliveiro
2025-02-04 21:42:08 UTC
Reply
Permalink
Post by candycanearter07
The files I download from YouTube always contain the metadata string
"ISO Media file produced by Google Inc."
Weird. I think yt-dlp has an option to overwrite the metadata with info
about the video itself?
This *is* info about the video itself.
Ben Collver
2025-01-19 14:47:24 UTC
Reply
Permalink
Post by Ivan Shmakov
I. e., my point being: you can't escape web by switching to
Gopher, because Gopher /is/ web. (Even if 'darker' part of it.)
I realize that the distinction between the web and the Internet can be
confusing. At one point Microsoft labeled the web browser desktop icon
"The Internet". At another point Gmail made it mainstream to do email
in a web browser.

<https://www.getmyos.com/upload/files/2018/10/05/windows_95_screenshot_1_1_bedc52f3b61686c533b5b318405508a6.png>

Below is a link explaining the difference between the web and the
Internet.

<https://askleo.com/whats-the-difference-between-the-web-and-the-internet/>

In short, gopher is not the web. It does not use the HTTP protocol, the
HTML format, nor other web standards such as Javascript. Gopher is a
separate protocol that is not directly viewable in mainstream browsers
such as Chrome and Mozilla.
yeti
2025-01-19 15:32:36 UTC
Reply
Permalink
Post by Ben Collver
In short, gopher is not the web. It does not use the HTTP protocol,
the HTML format, nor other web standards such as Javascript. Gopher
is a separate protocol that is not directly viewable in mainstream
browsers such as Chrome and Mozilla.
I contradict.

When browsers appeared, we thought of the web as what was accessible
by them. FTP, HTTP and Gopher were among this in the early days.

Gopher is not the web. Yes.

HTTP is not the web!

They just are part of the web.

Today's big$$$-browsers converge to single protocol network file viewers
and unluckily the smallweb browsers do too.

Let's prefer multi-protocol browsers and return to all the good stuff
being just a click away from each other.

That was what the web was meant to be and we should make it exactly that
again.

First step: Prefer writing plugins for existing browsers over creating
more single protocol file viewers.

Writing plugins for Chawan (TUI) and Dillo (GUI) is easy. If you can
say that about other browsers too, let's start a list/FAQ in
comp.infosystems (the protocol independent group please) about it.
--
I do not bite, I just want to play.
Sn!pe
2025-01-19 16:05:36 UTC
Reply
Permalink
Post by yeti
Post by Ben Collver
In short, gopher is not the web. It does not use the HTTP protocol,
the HTML format, nor other web standards such as Javascript. Gopher
is a separate protocol that is not directly viewable in mainstream
browsers such as Chrome and Mozilla.
I contradict.
When browsers appeared, we thought of the web as what was accessible
by them. FTP, HTTP and Gopher were among this in the early days.
Gopher is not the web. Yes.
HTTP is not the web!
They just are part of the web.
If by "the web" you mean *The Internet*, I would agree. However, to me
"the web" means HTML browsers running HTTP. Call me old fashioned if
you like but IMO what you call "the web" is only *part* of The Internet.

Yes, HTML browsers have supplanted many earlier protocols, embraced
them and made them their own, but still it is only the web, just a part of
The Internet, not its entirety.


[what follows is left for context]
Post by yeti
Today's big$$$-browsers converge to single protocol network file viewers
and unluckily the smallweb browsers do too.
Let's prefer multi protocol browsers and return to all goof stuff being
just a click away from each.
That was what the web was meant to be and we should make it exactly that
again.
First step: Prefer writing plugins for existing browsers over creating
more single protocol file viewers.
Writing plugins for Chawan (TUI) and Dillo (GUI) is easy. If you can
say that about other browsers too, let's start a list/FAQ in
comp.infosystems (the protocol independent group please) about it.
--
^Ï^. Sn!pe, PTB, FIBS My pet rock Gordon just is.
Ivan Shmakov
2025-01-19 19:15:29 UTC
Reply
Permalink
Newsgroups: comp.infosystems,comp.misc
I took the liberty to disregard the crosspost.
Post by Ben Collver
In short, gopher is not the web. It does not use the HTTP protocol,
the HTML format, nor other web standards such as Javascript. Gopher
is a separate protocol that is not directly viewable in mainstream
browsers such as Chrome and Mozilla.
Gopher resources are indeed not directly viewable in /modern/
browsers, so I can agree they're not part of /modern/ web.

From where I stand, they're still part of the web at large.

As an aside, who decides what is or is not a /web/ standard?
If the suggestion is to only consider official W3C TRs as
"web standards proper" then, well, HTML is currently maintained
by WHATWG, not W3C; and HTTP/1.1 is IETF RFC 9112 / IETF STD 99.
I contradict.
When browsers appeared, we thought of the web as what was accessible
by them. FTP, HTTP and Gopher were among this in the early days.
Gopher is not the web. Yes.
HTTP is not the web!
They just are part of the web.
Today's big$$$-browsers converge to single protocol network file
viewers and unluckily the smallweb browsers do too.
That's how I see it as well. I've been using Lynx for over
two decades now, and I have no trouble using it to read HTML
documents (local or delivered over HTTP/1; provided, of course,
they are documents, rather than Javascript programs wrapped
in HTML, as is not uncommon today), gopherholes, or Usenet
articles (such as news:***@tilde.institute I'm
responding to.) It "just works."

I have no trouble understanding the difference between the web
proper and Internet as the technology it relies upon, either.

DNS is not web because even though it's essential for the web
as it is today to work, you can't point your browser, modern or
otherwise, to a DNS server, a DNS zone, or even an individual DNS
resource record (even though your browser /will/ request one from
your local recursive resolver, or its DNS-over-HTTP equivalent,
when you point it to a URL with a DNS name in that, be that
http://example.net/ or nntp://news.example.com/comp.misc .)

NTP is not web for much the same reason: there're no URIs for NTP
servers or individual NTP packets. Neither are there URIs for
currently active TCP connections or UDP datagrams or IP hosts.

There /are/ URIs for email mailboxes (mailto:***@example.net)
to send mail to, and phone numbers (tel:) to call, though.

To summarize, from a purely practical PoV, if you can access it
from /your/ browser, it is part of /your/ web. From a conceptual
PoV, I'd define "web" as a collection of interlinked resources
identified by their URIs. So, if it has an URI and that URI is
mentioned somewhere on the web, it's part of the web too.

Modern web is important because that's often where the people
you can talk to are. But non-modern portions of the web could
be just as important, especially if it's where most of the
people you /actually/ talk to are. Such as news:comp.misc .
Ben Collver
2025-01-20 15:37:07 UTC
Reply
Permalink
Post by yeti
Post by Ben Collver
In short, gopher is not the web. It does not use the HTTP protocol,
the HTML format, nor other web standards such as Javascript. Gopher
is a separate protocol that is not directly viewable in mainstream
browsers such as Chrome and Mozilla.
I contradict.
When browsers appeared, we thought of the web as what was accessible
by them. FTP, HTTP and Gopher were among this in the early days.
In the dawn of the Internet some people used a service called FTPmail
because it could be faster and cheaper to transfer data over email
than over direct Internet connections. By your logic, one could argue
that FTP is email because it was historically used in email clients.
One could also argue that because when browsers appeared, they could
view HTML content over the Server Message Block protocol, that CIFS
is also the web. Such arguments strike me as disingenuous.
Ivan Shmakov
2025-01-24 18:45:05 UTC
Reply
Permalink
Post by Ben Collver
Post by yeti
Post by Ben Collver
In short, gopher is not the web. It does not use the HTTP protocol,
the HTML format, nor other web standards such as Javascript. Gopher
is a separate protocol that is not directly viewable in mainstream
browsers such as Chrome and Mozilla.
There's a variety of formats that modern browsers allow viewing
directly, in addition to (X)HTML. Such as WebM; e. g. (URI split
for readability; tr -d \\n before use):

http://upload.wikimedia.org/wikipedia/commons/2/22/
%C2%AB%D0%9C%D0%B0%D1%81%D1%82%D0%B5%D1%80
-%D0%A2%D1%83%D0%BD%D0%BA%D0%B0%C2%BB
_%D0%BE%D1%82%D0%BA%D1%80%D1%8B%D0%B2%D0%B0%D0%B5%D1%82%D1%81%D1%8F%2C
_2020-011_092050.webm

Given the lack of hyperlinking in WebM, I'd hesitate to call
such a file a "webpage." SVG does support hyperlinks, however,
so I don't see much reason myself to be opposed to SVG webpages.
Post by Ben Collver
Post by yeti
I contradict.
When browsers appeared, we thought of the web as what was accessible
by them. FTP, HTTP and Gopher were among this in the early days.
For instance, per http://en.wikipedia.org/wiki/NCSA_Mosaic :

W> Mosaic is based on the libwww library and thus supported a wide
W> variety of Internet protocols included in the library: Archie, FTP,
W> gopher, HTTP, NNTP, telnet, WAIS.

My understanding is that Lynx retains the libwww codebase to
this day, hence its support for a variety of web protocols well
beyond the modern notion of HTTP(S)-only web.

Call me old-fashioned, but my understanding of what "web" is
/is/ heavily influenced by the example of Mosaic.
Post by Ben Collver
In the dawn of the Internet some people used a service called FTPmail
because it could be faster and cheaper to transfer data over email
than over direct Internet connections. By your logic, one could argue
that FTP is email because it was historically used in email clients.
What I think you're referring to falls under the concept of a
/gateway./ There used to be servers that you'd send a web URI
via email to, get it downloaded by a batch web client (such as
Wget) there, and get the result delivered to you in a response
email. Possibly over a cheaper, high-latency link, such as UUCP.

(Wouldn't make as much sense to request a JPEG this way, only
to download it later over POP3 over SLIP, Base64 and all, now
would it?)

It's not dissimilar to how one can read netnews articles via
http://al.howardknight.net/ . By itself, that doesn't make
netnews a part of web, nor does it make HTTP a netnews protocol
(even if it /is/ used in this case for netnews transmission.)

Also, "email client" is a misnomer. An email user agent
would commonly act as /two/ clients: an ESMTP client for mail
submission, and, say, an IMAP client for mailbox access.

A modern MUA, such as Thunderbird, would also embed a web browser
so it can display HTML parts in email /as well as/ images
referenced in those parts, including those that need retrieval
over HTTP. Hence HTTP client being /also/ part of the so-called
"email client." (Even though its use would typically be disabled
for privacy reasons.)

Conversely, a traditional MUA, such as BSD mailx(1), would
contain /no/ network client code within at all, relying instead
on system facilities, such as the conventional sendmail(1) MTA
entrypoint. (Or a program like esmtp(1) posing as one.)
And (or) a program like fetchmail(1) or mbsync(1).

Curiously enough, email transmission between hosts was
originally implemented on top of the FTP protocol; consider, e. g.:

rfc475> This paper describes my understanding of the results of the
rfc475> Network Mail System meeting SRI-ARC on February 23, 1973, and
rfc475> the implications for FTP (File Transfer Protocol). There was
rfc475> general agreement at the meeting that network mail function
rfc475> should be within FTP.

rfc475> FTP currently provides two commands for handling mail. The MAIL
rfc475> command allows a user to send mail via the TELNET connection
rfc475> (the server collects the mail and determines its end by
rfc475> searching for the character sequence "CRLF.CRLF"). The MLFL
rfc475> (mail file) command allows a user to send mail via the data
rfc475> connection (requires a user-FTP to handle the command but
rfc475> transfer is more efficient as server need not search for
rfc475> a special character sequence). [...]

Not only this predates the transition from Transmission Control
/Program/ ("IPv3") to Transmission Control Protocol + Internet
Protocol (TCP/IPv4), but apparently even the first (?) formal
specification of the former in 1974:

rfc-index> 0675 Specification of Internet Transmission Control Program.
rfc-index> V. Cerf, Y. Dalal, C. Sunshine. December 1974.
rfc-index> (Obsoleted by RFC7805) (Status: HISTORIC)
rfc-index> (DOI: 10.17487/RFC0675)

The dawn of the Internet, indeed.
Post by Ben Collver
One could also argue that because when browsers appeared, they could
view HTML content over the Server Message Block protocol, that CIFS
is also the web. Such arguments strike me as disingenuous.
I'm not aware of such browsers, aside of the fact that some
Windows-based ones have allowed \\host\path syntax in place
of proper URLs. I doubt that aside of the syntax, the browser
had any SMB/CIFS client code within itself, however.

So far in this thread, I see two possible definitions of the
web: one I've suggested that boils down to "documents with
hyperlinks based on URI syntax and semantics", and the other,
that to me sounds like "what Google says." (I don't see Mozilla
as a major driving force behind the web this day and age.)

And I /do/ understand why Google would push for HTTP(S)-only
web (even with "HTTP" now being expanded to mean /three/ similar
in concept, but otherwise mutually incompatible protocols.)
And I won't envy any Google manager who'll have to explain to
the investors a decision that lowers the profits in the short
term, and hardly promises any tangible benefits later, such as
the decision to add (and take responsibility maintaining) a
Gopher client to the browser.

I do not understand why people outside of Google have to be
bound by the decisions of their management, however. The web
browser I use supported Gopher since before Google existed;
I fail to see why "Google saying so" has to be a sufficient
reason to at once stop deeming the protocol part of the web.

As to the definition I've suggested, I could only add the
requirement for the relevant protocol(s) to have at least two
independent implementations.

My understanding is that Gopher does have such implementations.
No idea about CIFS, but given (if Wikipedia [1] is to be believed)
that Microsoft has never made good use of it, it sounds doubtful.

Hence: not web.

[1] http://en.wikipedia.org/wiki/Server_Message_Block#CIFS
n***@zzo38computer.org.invalid
2025-01-20 19:23:24 UTC
Reply
Permalink
Post by yeti
When browsers appeared, we thought of the web as what was accessible
by them. FTP, HTTP and Gopher were among this in the early days.
Many browsers can also display local files (which is not internet), and
many newer ones can display PDF files (whether or not they are accessed
by the internet), too, though.
Post by yeti
Today's big$$$-browsers converge to single protocol network file viewers
and unluckily the smallweb browsers do too.
Some of the small web browsers do support multiple protocols and multiple
file formats. Unfortunately the major web browsers do not support such
things very well even if you add extensions, though; and they have many
other problems too other than just this, anyways.
Post by yeti
Let's prefer multi protocol browsers and return to all goof stuff being
just a click away from each.
That was what the web was meant to be and we should make it exactly that
again.
First step: Prefer writing plugins for existing browsers over creating
more single protocol file viewers.
I think that it should be done, although you can still make up new browsers
that may support such plugins too. I also think that the protocols and the
file formats should be handled separately, so there will be one plugin for
Gemini protocol and one plugin for Gemini file format (although they will
probably be a part of the same package, since they are used together), and
one plugin for Spartan protocol (which also uses Gemini file format so you
do not need a separate plugin for Spartan file format), etc.
--
Don't laugh at the moon when it is day time in France.
Dave Yeo
2025-01-17 02:04:31 UTC
Reply
Permalink
Ivan Shmakov wrote:
...
Post by Ivan Shmakov
And even
though modern browsers might fail to handle some of them,
traditional ones (like Lynx and, reportedly, SeaMonkey) still
support things like news:, mailto:, gopher:, and even ftp:.
My SeaMonkey needs an extension to handle gopher. I have Dooble, a Qt
browser, Chromium-based, that does do gopher. Someone tried to convince
the author to support Gemini but didn't make a good enough case for it.
Post by Ivan Shmakov
Post by Bozo User
gopher://magical.fish
gopher://gopherddit.com
gopher://sdf.org
gopher://hngopher.com
gemini://gemi.dev (head to news waffle)
By the by, what's the equivalent of wget(1) for gopher:?
Curl seems to work for gopher.
Dave
yeti
2024-12-05 05:52:51 UTC
Reply
Permalink
Post by candycanearter07
But does it support JS?
EWW?

------------------------------------------------------------------------
Although EWW and shr.el do their best to render webpages in GNU Emacs
some websites use features which can not be properly represented or are
not implemented (e.g., JavaScript).
------------------------------------------------------------------------
( (eww.info)Basics )
--
I do not bite, I just want to play.
yeti
2025-01-16 11:00:29 UTC
Reply
Permalink
I haven't yet managed to get JS (and Sixels) running with Elinks, but
there is:

<https://sr.ht/~bptato/chawan/>

JS works at least a bit, maybe just enough for Gitea?

<https://dev1galaxy.org/viewtopic.php?pid=53922#p53922>

Despite allowing JS and cookies I couldn't use Google[0].

Fossil's menu does open with JS disabled, but I cannot select stuff in
there. With JS allowed it doesn't even open.

<https://www.fossil-scm.org>

I see frequent changes in Chawan, so maybe this is the one to watch now;
stuff that glitches today may be working tomorrow.

____________

[0]: But meh ... there are alternatives[1].

[1]: DDG
<https://duckduckgo.com/>
FrogFind
<http://www.frogfind.com/>
--
4. Hitchhiker 11:
(72) "Watch the road!'' she yelped.
(73) "Shit!"
Anssi Saari
2024-11-28 10:45:46 UTC
Reply
Permalink
Post by Retrograde
Doing everything from the terminal just isn’t viable for me, mostly because I
didn’t grow up with it.
I guess I was lucky, I was exposed to a bewildering variety of computers
as I grew up in the 80s. There was the myriad of home computers, a lot
of Commodores and Speccys but also Sharps and MSXs and whatever. Some
CP/M machines at school, there were also some early Windows PCs there,
then the GUIs like Atari ST and Amiga's Workbench, sometimes Macs.

90s, I went to the University. They had MS-DOS PCs and text terminals
connected to Unix machines. Some Sun and HP Unix workstations too but
those were for more advanced students only for which I got access
later. Funny contrast, in '91 I got a summer job in a university
department which was all Macs. Looking back, it seems so radical that I
had dual displays and a "huge" 17" monitor to work on way back
then. Even if the other display was the minimal one integrated to the
boxy Mac.

In the meantime, my home computing went from a Commodore 64 to MS-DOS
PC, then dual booting that with OS/2 and some Linux experiments. Games
went to Windows so that MS-DOS became Windows 98 and XP and 7 and
10. Late 90s Linux experiments became permanent when I learned of Debian
Stable. OS/2 disappeared when picking supported hardware for it got too
tiresome.

Work, started mid-90s with Sun Unix workstations until I was kicked to
Office land. That was an awful time and when I escaped, it's been much
the same, Windows PC on the desk, Unix and later Linux server
somewhere. Oh, one job actually provided a Linux workstation under the
desk in addition to a Windows laptop but that was one time.

But to the topic, text only in 2024? I don't think so. Web browsing and
email, just no. Sure I just used Lynx on a Linux server at work to check
the proxy settings are correct and I do use mutt to teach misses to my
spam filter but that's pretty much it. For me, the email I get is HTML
with pictures from commercial sources. Very little personal
correspondence over email these days and mailing lists I get via NNTP
and gmane.
Bozo User
2025-01-12 23:01:24 UTC
Reply
Permalink
Post by Retrograde
Title: Using (only) a Linux terminal for my personal computing in 2024
[...]
In my case, I use cwm+uxterm+a bunch of cli/tui apps, such as profanity,
catgirl, mocp... and the only X software I use are sxiv, mpv and mupdf.
Oh, and GV for a random PostScript file. That's it.

If you want, I can post my setup. It's megafast.
Ah, no, I forgot: xload and xlock, which just lie there.
Anyway, it's like an advanced terminal from a different future.
Salvador Mirzo
2025-01-13 01:03:06 UTC
Reply
Permalink
Bozo User <***@disroot.org> writes:

[...]
Post by Bozo User
In my case, I use cwm+uxterm+a bunch of cli/tui apps, such as profanity,
catgirl, mocp... and the only X software I use are sxiv, mpv and mupdf.
Oh, and GV for a random PostScript file. That's it.
I too run cwm+uxterm! But then I add the GNU EMACS on top.

Thanks for mentioning mupdf---fast and nice. I wonder if it can display
the outline of a pdf (if available).
D
2025-01-13 09:48:17 UTC
Reply
Permalink
Post by Salvador Mirzo
[...]
Post by Bozo User
In my case, I use cwm+uxterm+a bunch of cli/tui apps, such as profanity,
catgirl, mocp... and the only X software I use are sxiv, mpv and mupdf.
Oh, and GV for a random PostScript file. That's it.
I too run cwm+uxterm! But then I add the GNU EMACS on top.
Thanks for mentioning mupdf---fast and nice. I wonder if it can display
the outline of a pdf (if available).
I use qpdf. Has sessions, and is fairly light weight.
Salvador Mirzo
2025-01-13 19:24:27 UTC
Reply
Permalink
Post by D
Post by Salvador Mirzo
[...]
Post by Bozo User
In my case, I use cwm+uxterm+a bunch of cli/tui apps, such as profanity,
catgirl, mocp... and the only X software I use are sxiv, mpv and mupdf.
Oh, and GV for a random PostScript file. That's it.
I too run cwm+uxterm! But then I add the GNU EMACS on top.
Thanks for mentioning mupdf---fast and nice. I wonder if it can display
the outline of a pdf (if available).
I use qpdf. Has sessions, and is fairly light weight.
Wonderful! Pretty nice as well. Very easy to use. Now, it can't seem
to use lpr for printing? That's how I print. :) But I can workaround it
by figuring out how to tell lpr to tell my printer to only print a few
pages I'm interested in and then use the command line. Thanks for
mentioning qpdf.
D
2025-01-14 17:50:51 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by D
Post by Salvador Mirzo
[...]
Post by Bozo User
In my case, I use cwm+uxterm+a bunch of cli/tui apps, such as profanity,
catgirl, mocp... and the only X software I use are sxiv, mpv and mupdf.
Oh, and GV for a random PostScript file. That's it.
I too run cwm+uxterm! But then I add the GNU EMACS on top.
Thanks for mentioning mupdf---fast and nice. I wonder if it can display
the outline of a pdf (if available).
I use qpdf. Has sessions, and is fairly light weight.
Wonderful! Pretty nice as well. Very easy to use. Now, it can't seem
to use lpr for printing? That's how I print. :) But I can workaround it
by figuring out how to tell lpr to tell my printer to only print a few
pages I'm interested in and then use the command line. Thanks for
mentioning qpdf.
You're welcome! =)
Salvador Mirzo
2025-01-16 01:10:38 UTC
Reply
Permalink
Salvador Mirzo <***@example.com> writes:

[...]
Post by Salvador Mirzo
Post by D
I use qpdf. Has sessions, and is fairly light weight.
Wonderful! Pretty nice as well. Very easy to use. Now, it can't seem
to use lpr for printing? That's how I print. :) But I can workaround it
by figuring out how to tell lpr to tell my printer to only print a few
pages I'm interested in and then use the command line. Thanks for
mentioning qpdf.
I suspect I imagine wrong how things actually work. I thought perhaps
there would be a command line such as ``lpr --pages 7-14''. Now I
believe a program like evince generates a PostScript of the pages you
asked it to and then sends this complete PostScript document of the
pages you requested to a pipe or file on disk that lpr sends to the
printer. So, if qpdf doesn't do the same, I'm out of luck in terms of
printing with lpr. But I think I can find a program that takes page
ranges and transformations like scaling and produces a PostScript
document that I can send to lpr, so I can use qpdfview and use the
command line to print stuff out.
Rich
2025-01-16 04:15:53 UTC
Reply
Permalink
Post by Salvador Mirzo
[...]
Post by Salvador Mirzo
Post by D
I use qpdf. Has sessions, and is fairly light weight.
Wonderful! Pretty nice as well. Very easy to use. Now, it can't seem
to use lpr for printing? That's how I print. :) But I can workaround it
by figuring out how to tell lpr to tell my printer to only print a few
pages I'm interested in and then use the command line. Thanks for
mentioning qpdf.
I suspect I imagine wrong how things actually work. I thought perhaps
there would be a command line such as ``lpr --pages 7-14''. Now I
believe a program like evince generates a PostScript of the pages you
asked it to and then sends this complete PostScript document of the
pages you requested to a pipe or file on disk that lpr sends to the
printer.
Yes, selecting "which pages" happens before the result gets sent to lpr
(or cups).
Post by Salvador Mirzo
But I think I can find a program that takes page ranges and
transformations like scaling and produces a PostScript document that
I can send to lpr, so I can use qpdfview and use the command line to
print stuff out.
If you are dealing with pdf files, then pdftk
<https://en.wikipedia.org/wiki/PDFtk> works very well for doing various
transforms on pdf files (including selecting a subset of pages, which do
not all have to be contiguous).

If you have actual postscript files, you can use ghostscript from the
command line to "distill" them to pdf (note ghostscrpts "pdfwrite"
output driver) and then use pdftk for further transforming.
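
Roughly along these lines (file names are just placeholders):

$ pdftk input.pdf cat 7-14 output pages.pdf
# keep only pages 7-14 of input.pdf
$ gs -q -sDEVICE=pdfwrite -o converted.pdf original.ps
# "distill" a PostScript file to PDF, ready for pdftk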
Computer Nerd Kev
2025-01-16 05:58:27 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by Salvador Mirzo
Wonderful! Pretty nice as well. Very easy to use. Now, it can't seem
to use lpr for printing? That's how I print. :) But I can workaround it
by figuring out how to tell lpr to tell my printer to only print a few
pages I'm interested in and then use the command line. Thanks for
mentioning qpdf.
I suspect I imagine wrong how things actually work. I thought perhaps
there would be a command line such as ``lpr --pages 7-14''. Now I
believe a program like evince generates a PostScript of the pages you
asked it to and then sends this complete PostScript document of the
pages you requested to a pipe or file on disk that lpr sends to the
printer. So, if qpdf doesn't do the same, I'm out of luck in terms of
printing with lpr. But I think I can find a program that takes page
ranges and transformations like scaling and produces a PostScript
document that I can send to lpr, so I can use qpdfview and use the
command line to print stuff out.
If you want a Postscript file of a page range from a PDF, convert the
PDF to Postscript first then use psselect from psutils. Or use the
"save marked" function in gv, which I personally use as my default
PDF viewer.
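
That is, something like this (pdftops here is the poppler-utils tool;
Ghostscript's pdf2ps would also do for the conversion step):

$ pdftops document.pdf - | psselect -p7-14 > pages.ps
$ lpr pages.ps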
--
__ __
#_ < |\| |< _#
Lawrence D'Oliveiro
2025-01-21 05:31:45 UTC
Reply
Permalink
I thought perhaps there would be a command line such as
``lpr --pages 7-14''.
<https://manpages.debian.org/lp(1)>:

-P page-list
Specifies which pages to print in the document. The list can
contain a list of numbers and ranges (#-#) separated by
commas, e.g., "1,3-5,16". The page numbers refer to the output
pages and not the document's original pages - options like
"number-up" can affect the numbering of the pages.
Ivan Shmakov
2025-01-23 19:33:36 UTC
Reply
Permalink
I suspect I imagine wrong how things actually work. I thought
perhaps there would be a command line such as ``lpr --pages 7-14''.
As has already been pointed out in this thread, CUPS, a fairly
common choice for a printer spooler in GNU/Linux systems,
provides lp(1) command that does have just such an option.
Now I believe a program like evince generates a PostScript of
the pages you asked it to and then sends this complete PostScript
document of the pages you requested to a pipe or file on disk
that lpr sends to the printer.
AIUI, traditional lpd(8) / lpr(1) do require the file to be
preprocessed in such a way before it is submitted for printing,
but even then, they do /not/ require for the file to be
PostScript: it's possible to setup the respective filters to
accept other formats, such as PDF.
So, if qpdf doesn't do the same, I'm out of luck in terms of
printing with lpr. But I think I can find a program that takes
page ranges and transformations like scaling and produces a
PostScript document that I can send to lpr, so I can use qpdfview
and use the command line to print stuff out.
I'm not too familiar with qpdf(1) (and I don't think I've ever
used qpdfview [*]), but it does have a --pages option. E. g.:

$ qpdf --empty --pages in.pdf 5-8 -- out.pdf
$ qpdf in.pdf --pages . 5-8 -- out.pdf

(The second variant preserves the input document metadata,
which isn't probably of much use for printing anyway.)

... A somewhat little-known fact is that once uncompressed, PDF
is largely a text file (perhaps unsurprising, given it comes
from the same company that created PostScript), though employing
byte offsets rather unrestrictedly.

qpdf(1) has a --qdf option that undoes compression and annotates
the file in such a way that the companion fix-qdf program can
fix the byte offsets, at least in certain cases, thus allowing the
PDF file to be edited with a text editor. (Though probably using
a library, such as PDF::API2 for Perl, would be more practical
than trying to, say, adapt sed(1) for automated edits in this case.)
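
The round trip would look roughly like this (file names illustrative):

$ qpdf --qdf in.pdf in.qdf.pdf # write the uncompressed, annotated QDF form
$ $EDITOR in.qdf.pdf # hand-edit the now mostly-textual file
$ fix-qdf in.qdf.pdf > out.pdf # recompute the byte offsets afterwards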

[*] Given a choice, I tend to prefer HTML. If the document I'm
interested in is only available in a PDF version, I tend to
use pdftotext(1). If that fails to produce a legible version,
I resort to Zathura, preferring it mostly for its UI.
Salvador Mirzo
2025-02-12 16:12:42 UTC
Reply
Permalink
Post by Ivan Shmakov
I suspect I imagine wrong how things actually work. I thought
perhaps there would be a command line such as ``lpr --pages 7-14''.
As has already been pointed in this thread, CUPS, a fairly
common choice for a printer spooler in GNU/Linux systems,
provides lp(1) command that does have just such an option.
Thanks for the information. It turns out I'm not able to print
two-sided-long-edge with CUPS and my Brother HL-L2360DW. I resorted to
using /etc/printcap and lpd's lpr (not CUPS's lpr) because I can then
set my printer to always do two-sided-long-edge, which is nearly 100% of
the way I print.
Post by Ivan Shmakov
Now I believe a program like evince generates a PostScript of
the pages you asked it to and then sends this complete PostScript
document of the pages you requested to a pipe or file on disk
that lpr sends to the printer.
AIUI, traditional lpd(8) / lpr(1) do require the file to be
preprocessed in such a way before it is submitted for printing,
but even then, they do /not/ require for the file to be
PostScript: it's possible to setup the respective filters to
accept other formats, such as PDF.
That's what I did as well. I use a filter called ps2pcl.

lp|remote|brother|Brother HL-L2360DW:\
:lp=***@BRWB052162167A6:\
:if=/usr/local/libexec/ps2pcl:\
:sh:sd=/var/spool/output/lpd:\
:lf=/var/log/lpd-errs:

But today I learned that the Brother HL-L2360DW supports PostScript and
I was able to set it up that way with CUPS. I just don't use it because I
never want two-sided-short-edge or one-sided, which is all I can get
with CUPS for whatever reason.
Post by Ivan Shmakov
So, if qpdf doesn't do the same, I'm out of luck in terms of
printing with lpr. But I think I can find a program that takes
page ranges and transformations like scaling and produces a
PostScript document that I can send to lpr, so I can use qpdfview
and use the command line to print stuff out.
I'm not too familiar with qpdf(1) (and I don't think I've ever
Turns out qpdfview is a pretty usable PDF viewer and it's the one I'm
using the most here. I think qpdfview is the closest thing to
SumatraPDF (on Windows), my favorite.
Post by Ivan Shmakov
$ qpdf --empty --pages in.pdf 5-8 -- out.pdf
$ qpdf in.pdf --pages . 5-8 -- out.pdf
Thanks! That works.
Post by Ivan Shmakov
(The second variant preserves the input document metadata,
which isn't probably of much use for printing anyway.)
Good to know. Sometimes we produce PDF for screen viewing.
Post by Ivan Shmakov
... A somewhat little-known fact is that once uncompressed, PDF
is largely a text file (perhaps unsurprising, given it comes
from the same company that created PostScript), though employing
byte offsets rather unrestrictedly.
qpdf(1) has a --qdf option that undoes compressesion and annotates
the file in such a way that the companion fix-qdf program can
fix the byte offsets, at least in certain cases, thus allowing the
PDF file to be edited with a text editor. (Though probably using
a library, such as PDF::API2 for Perl, would be more practical
than trying to, say, adapt sed(1) for automated edits in this case.)
You seem to know a lot about PostScript. So here's a question. When I
want to print two-sided-long-edge, is that a command included in the
PostScript document itself (and then sent to the printer)?
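
What I have in mind: at the spooler level I would ask for it like this
(the queue name is just a placeholder), and my understanding, which may
well be wrong, is that for a PostScript printer this then gets turned
into a setpagedevice request inside the job itself:

$ lp -d Brother_HL-L2360DW -o sides=two-sided-long-edge document.pdf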
Jerry Peters
2025-02-16 20:55:04 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by Ivan Shmakov
I suspect I imagine wrong how things actually work. I thought
perhaps there would be a command line such as ``lpr --pages 7-14''.
As has already been pointed in this thread, CUPS, a fairly
common choice for a printer spooler in GNU/Linux systems,
provides lp(1) command that does have just such an option.
Thanks for the information. It turns out I'm not being able to print
two-sided-long-edge with CUPS and my Brother HL-L2360DW. I resorted to
using /etc/printcap and lpd's lpr (not CUPS's lpr) because I can then
set my printer to always do two-sided-long-edge, which is nearly 100% of
the way I print.
Sounds like an incorrect PPD, which is where the various options come
from.
I have an HL220dw and CUPS supports both simplex and duplex printing,
selectable at the time I print.
Salvador Mirzo
2025-02-17 01:54:34 UTC
Reply
Permalink
Post by Jerry Peters
Post by Salvador Mirzo
Post by Ivan Shmakov
I suspect I imagine wrong how things actually work. I thought
perhaps there would be a command line such as ``lpr --pages 7-14''.
As has already been pointed in this thread, CUPS, a fairly
common choice for a printer spooler in GNU/Linux systems,
provides lp(1) command that does have just such an option.
Thanks for the information. It turns out I'm not being able to print
two-sided-long-edge with CUPS and my Brother HL-L2360DW. I resorted to
using /etc/printcap and lpd's lpr (not CUPS's lpr) because I can then
set my printer to always do two-sided-long-edge, which is nearly 100% of
the way I print.
Sounds like an incorrect PPD, which is where the various options come
from.
I have a HL220dw and CUPS supports both simplex and duplex printing,
selectable at the time I print.
Awesome news. I've tried hacking my PPD file a bit, but
unsuccessfully. I've reported my attempts to

comp.unix.bsd.freebsd.misc.

Would you be so kind as to share your PPD? I could perhaps get more clues
from seeing one PPD file that really works. I have suspected mine could be
faulty, but I know so little about PPDs and PostScript. My greatest
insight so far is that the PPD file houses small PostScript snippets
that PostScript-generating software uses to make the printer do one
thing or another. Here's my full PPD in use, FWIW:

*PPD-Adobe: "4.3"
*%%%% PPD file for HL-L2360D series with CUPS.
*%%%% Created by the CUPS PPD Compiler CUPS v2.4.10.
*FormatVersion: "4.3"
*FileVersion: "6"
*LanguageVersion: English
*LanguageEncoding: ISOLatin1
*PCFileName: "brl2360d.ppd"
*Product: "(HL-L2360D series)"
*Manufacturer: "Brother"
*ModelName: "Brother HL-L2360D series"
*ShortNickName: "Brother HL-L2360D series"
*NickName: "Brother HL-L2360D series, using brlaser v6"
*PSVersion: "(3010.000) 0"
*LanguageLevel: "3"
*ColorDevice: False
*DefaultColorSpace: Gray
*FileSystem: False
*Throughput: "1"
*LandscapeOrientation: Plus90
*TTRasterizer: Type42
*% Driver-defined attributes...
*1284DeviceID: "MFG:Brother;CMD:PJL,PCL,PCLXL,URF;MDL:HL-L2360D series;CLS:PRINTER;CID:Brother Laser Type1;URF:W8,CP1,IS4-1,MT1-3-4-5-8,OB10,PQ4,RS300-600,V1.3,DM1;"
*cupsBackSide: "Rotated"
*cupsVersion: 2.4
*cupsModelNumber: 0
*cupsManualCopies: False
*cupsFilter: "application/vnd.cups-raster 33 rastertobrlaser"
*cupsLanguages: "en"
*OpenUI *PageSize/Media Size: PickOne
*OrderDependency: 10 AnySetup *PageSize
*DefaultPageSize: A4
*PageSize A4/A4: "<</PageSize[595 842]/ImagingBBox null>>setpagedevice"
*PageSize A5/A5: "<</PageSize[420 595]/ImagingBBox null>>setpagedevice"
*PageSize A6/A6: "<</PageSize[297 420]/ImagingBBox null>>setpagedevice"
*PageSize B5/JIS B5: "<</PageSize[516 729]/ImagingBBox null>>setpagedevice"
*PageSize B6/JIS B6: "<</PageSize[363 516]/ImagingBBox null>>setpagedevice"
*PageSize EnvC5/Envelope C5: "<</PageSize[459 649]/ImagingBBox null>>setpagedevice"
*PageSize EnvMonarch/Envelope Monarch: "<</PageSize[279 540]/ImagingBBox null>>setpagedevice"
*PageSize EnvDL/Envelope DL: "<</PageSize[312 624]/ImagingBBox null>>setpagedevice"
*PageSize Executive/Executive: "<</PageSize[522 756]/ImagingBBox null>>setpagedevice"
*PageSize Legal/US Legal: "<</PageSize[612 1008]/ImagingBBox null>>setpagedevice"
*PageSize Letter/US Letter: "<</PageSize[612 792]/ImagingBBox null>>setpagedevice"
*CloseUI: *PageSize
*OpenUI *PageRegion/Media Size: PickOne
*OrderDependency: 10 AnySetup *PageRegion
*DefaultPageRegion: A4
*PageRegion A4/A4: "<</PageSize[595 842]/ImagingBBox null>>setpagedevice"
*PageRegion A5/A5: "<</PageSize[420 595]/ImagingBBox null>>setpagedevice"
*PageRegion A6/A6: "<</PageSize[297 420]/ImagingBBox null>>setpagedevice"
*PageRegion B5/JIS B5: "<</PageSize[516 729]/ImagingBBox null>>setpagedevice"
*PageRegion B6/JIS B6: "<</PageSize[363 516]/ImagingBBox null>>setpagedevice"
*PageRegion EnvC5/Envelope C5: "<</PageSize[459 649]/ImagingBBox null>>setpagedevice"
*PageRegion EnvMonarch/Envelope Monarch: "<</PageSize[279 540]/ImagingBBox null>>setpagedevice"
*PageRegion EnvDL/Envelope DL: "<</PageSize[312 624]/ImagingBBox null>>setpagedevice"
*PageRegion Executive/Executive: "<</PageSize[522 756]/ImagingBBox null>>setpagedevice"
*PageRegion Legal/US Legal: "<</PageSize[612 1008]/ImagingBBox null>>setpagedevice"
*PageRegion Letter/US Letter: "<</PageSize[612 792]/ImagingBBox null>>setpagedevice"
*CloseUI: *PageRegion
*DefaultImageableArea: A4
*ImageableArea A4/A4: "8 8 587 826"
*ImageableArea A5/A5: "8 8 412 579"
*ImageableArea A6/A6: "8 8 289 404"
*ImageableArea B5/JIS B5: "8 8 508 713"
*ImageableArea B6/JIS B6: "8 8 355 500"
*ImageableArea EnvC5/Envelope C5: "8 8 451 633"
*ImageableArea EnvMonarch/Envelope Monarch: "8 8 271 524"
*ImageableArea EnvDL/Envelope DL: "8 8 304 608"
*ImageableArea Executive/Executive: "8 8 514 740"
*ImageableArea Legal/US Legal: "8 8 604 992"
*ImageableArea Letter/US Letter: "8 8 604 776"
*DefaultPaperDimension: A4
*PaperDimension A4/A4: "595 842"
*PaperDimension A5/A5: "420 595"
*PaperDimension A6/A6: "297 420"
*PaperDimension B5/JIS B5: "516 729"
*PaperDimension B6/JIS B6: "363 516"
*PaperDimension EnvC5/Envelope C5: "459 649"
*PaperDimension EnvMonarch/Envelope Monarch: "279 540"
*PaperDimension EnvDL/Envelope DL: "312 624"
*PaperDimension Executive/Executive: "522 756"
*PaperDimension Legal/US Legal: "612 1008"
*PaperDimension Letter/US Letter: "612 792"
*OpenUI *Resolution/Resolution: PickOne
*OrderDependency: 10 AnySetup *Resolution
*DefaultResolution: 600dpi
*Resolution 600dpi/600 DPI: "<</HWResolution[600 600]/cupsBitsPerColor 1/cupsRowCount 0/cupsRowFeed 0/cupsRowStep 0/cupsColorSpace 3>>setpagedevice"
*Resolution 1200dpi/1200HQ: "<</HWResolution[1200 1200]/cupsBitsPerColor 1/cupsRowCount 0/cupsRowFeed 0/cupsRowStep 0/cupsColorSpace 3>>setpagedevice"
*CloseUI: *Resolution
*OpenUI *InputSlot/Media Source: PickOne
*OrderDependency: 10 AnySetup *InputSlot
*DefaultInputSlot: Auto
*InputSlot Auto/Auto-select: "<</MediaPosition 0>>setpagedevice"
*InputSlot Tray1/Tray 1: "<</MediaPosition 1>>setpagedevice"
*InputSlot Tray2/Tray 2: "<</MediaPosition 2>>setpagedevice"
*InputSlot Tray3/Tray 3: "<</MediaPosition 3>>setpagedevice"
*InputSlot MPTray/MP Tray: "<</MediaPosition 4>>setpagedevice"
*InputSlot Manual/Manual: "<</MediaPosition 5>>setpagedevice"
*CloseUI: *InputSlot
*OpenUI *MediaType/Media Type: PickOne
*OrderDependency: 10 AnySetup *MediaType
*DefaultMediaType: PLAIN
*MediaType PLAIN/Plain paper: "<</MediaType(PLAIN)/cupsMediaType 0>>setpagedevice"
*MediaType THIN/Thin paper: "<</MediaType(THIN)/cupsMediaType 1>>setpagedevice"
*MediaType THICK/Thick paper: "<</MediaType(THICK)/cupsMediaType 2>>setpagedevice"
*MediaType THICKER/Thicker paper: "<</MediaType(THICKER)/cupsMediaType 3>>setpagedevice"
*MediaType BOND/Bond paper: "<</MediaType(BOND)/cupsMediaType 4>>setpagedevice"
*MediaType TRANS/Transparencies: "<</MediaType(TRANS)/cupsMediaType 5>>setpagedevice"
*MediaType ENV/Envelopes: "<</MediaType(ENV)/cupsMediaType 6>>setpagedevice"
*MediaType ENV-THICK/Thick envelopes: "<</MediaType(ENV-THICK)/cupsMediaType 7>>setpagedevice"
*MediaType ENV-THIN/Thin envelopes: "<</MediaType(ENV-THIN)/cupsMediaType 8>>setpagedevice"
*CloseUI: *MediaType
*OpenUI *brlaserEconomode/Toner save mode: Boolean
*OrderDependency: 10 AnySetup *brlaserEconomode
*DefaultbrlaserEconomode: False
*brlaserEconomode False/Off: "<</cupsInteger10 0>>setpagedevice"
*brlaserEconomode True/On: "<</cupsInteger10 1>>setpagedevice"
*CloseUI: *brlaserEconomode
*OpenUI *Duplex/2-Sided Printing: PickOne
*OrderDependency: 10 AnySetup *Duplex
*DefaultDuplex: None
*Duplex None/Off (1-Sided): "<</Duplex false>>setpagedevice"
*Duplex DuplexNoTumble/Long-Edge (Portrait): "<</Duplex true/Tumble true>>setpagedevice"
*Duplex DuplexTumble/Short-Edge (Landscape): "<</Duplex true/Tumble true>>setpagedevice"
*CloseUI: *Duplex
*DefaultFont: Times-Roman
*Font AvantGarde-Book: Standard "(1.05)" Standard ROM
*Font AvantGarde-BookOblique: Standard "(1.05)" Standard ROM
*Font AvantGarde-Demi: Standard "(1.05)" Standard ROM
*Font AvantGarde-DemiOblique: Standard "(1.05)" Standard ROM
*Font Bookman-Demi: Standard "(1.05)" Standard ROM
*Font Bookman-DemiItalic: Standard "(1.05)" Standard ROM
*Font Bookman-Light: Standard "(1.05)" Standard ROM
*Font Bookman-LightItalic: Standard "(1.05)" Standard ROM
*Font Courier: Standard "(1.05)" Standard ROM
*Font Courier-Bold: Standard "(1.05)" Standard ROM
*Font Courier-BoldOblique: Standard "(1.05)" Standard ROM
*Font Courier-Oblique: Standard "(1.05)" Standard ROM
*Font Helvetica: Standard "(1.05)" Standard ROM
*Font Helvetica-Bold: Standard "(1.05)" Standard ROM
*Font Helvetica-BoldOblique: Standard "(1.05)" Standard ROM
*Font Helvetica-Narrow: Standard "(1.05)" Standard ROM
*Font Helvetica-Narrow-Bold: Standard "(1.05)" Standard ROM
*Font Helvetica-Narrow-BoldOblique: Standard "(1.05)" Standard ROM
*Font Helvetica-Narrow-Oblique: Standard "(1.05)" Standard ROM
*Font Helvetica-Oblique: Standard "(1.05)" Standard ROM
*Font NewCenturySchlbk-Bold: Standard "(1.05)" Standard ROM
*Font NewCenturySchlbk-BoldItalic: Standard "(1.05)" Standard ROM
*Font NewCenturySchlbk-Italic: Standard "(1.05)" Standard ROM
*Font NewCenturySchlbk-Roman: Standard "(1.05)" Standard ROM
*Font Palatino-Bold: Standard "(1.05)" Standard ROM
*Font Palatino-BoldItalic: Standard "(1.05)" Standard ROM
*Font Palatino-Italic: Standard "(1.05)" Standard ROM
*Font Palatino-Roman: Standard "(1.05)" Standard ROM
*Font Symbol: Special "(001.005)" Special ROM
*Font Times-Bold: Standard "(1.05)" Standard ROM
*Font Times-BoldItalic: Standard "(1.05)" Standard ROM
*Font Times-Italic: Standard "(1.05)" Standard ROM
*Font Times-Roman: Standard "(1.05)" Standard ROM
*Font ZapfChancery-MediumItalic: Standard "(1.05)" Standard ROM
*Font ZapfDingbats: Special "(001.005)" Special ROM
*% End of brl2360d.ppd, 08482 bytes.
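
For what it's worth, two CUPS commands that can help sanity-check a PPD
like this one are cupstestppd(1), which validates the file against the
PPD spec, and lpoptions(1) with -l, which shows the options and defaults
CUPS actually derives from it. (The queue name HL-L2360DW below is only
a guess for illustration; substitute whatever lpstat -p reports.)

  cupstestppd /etc/cups/ppd/HL-L2360DW.ppd
  lpoptions -p HL-L2360DW -l | grep -i duplex
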
Salvador Mirzo
2025-02-17 01:56:39 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by Jerry Peters
Post by Salvador Mirzo
Post by Ivan Shmakov
I suspect I imagine wrong how things actually work. I thought
perhaps there would be a command line such as ``lpr --pages 7-14''.
As has already been pointed in this thread, CUPS, a fairly
common choice for a printer spooler in GNU/Linux systems,
provides lp(1) command that does have just such an option.
Thanks for the information. It turns out I'm not able to print
two-sided-long-edge with CUPS and my Brother HL-L2360DW. I resorted to
using /etc/printcap and lpd's lpr (not CUPS's lpr) because I can then
set my printer to always do two-sided-long-edge, which is nearly 100% of
the way I print.
Sounds like an incorrect PPD, which is where the various options come
from.
I have an HL220dw and CUPS supports both simplex and duplex printing,
selectable at the time I print.
Awesome news. I've tried hacking my PPD file a bit, but
unsuccessfully. I've reported my attempts to
comp.unix.bsd.freebsd.misc.
Would you be so kind as to share your PPD? I could perhaps get more
clues from seeing one PPD file that really works. I have suspected
mine could be faulty, but I know so little about PPDs and PostScript.
My greatest insight so far is that the PPD file houses small
PostScript snippets that PostScript-generating software uses to make
the printer do one
*PPD-Adobe: "4.3"
[...]

Looks like something is line-wrapping my long lines. I don't think
that's my news reader---Gnus v5.13. Could it be Eternal September?
I've no idea.
Lawrence D'Oliveiro
2025-02-17 03:41:38 UTC
Reply
Permalink
Post by Salvador Mirzo
*Duplex DuplexNoTumble/Long-Edge (Portrait): "<</Duplex true/Tumble true>>setpagedevice"
*Duplex DuplexTumble/Short-Edge (Landscape): "<</Duplex true/Tumble true>>setpagedevice"
I know what’s wrong: the first line should be

*Duplex DuplexNoTumble/Long-Edge (Portrait): "<</Duplex true/Tumble false>>setpagedevice"

At least it is for my Epson. Does that work for you?
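
For comparison, the full stanza as it usually appears in a PPD with
working duplex reads as follows; only the long-edge line differs from
what was posted:

  *Duplex None/Off (1-Sided): "<</Duplex false>>setpagedevice"
  *Duplex DuplexNoTumble/Long-Edge (Portrait): "<</Duplex true/Tumble false>>setpagedevice"
  *Duplex DuplexTumble/Short-Edge (Landscape): "<</Duplex true/Tumble true>>setpagedevice"

After editing the installed copy (usually /etc/cups/ppd/<queue>.ppd)
and restarting cupsd so the change is picked up, duplex can be tested
per job with something like (the queue name is an assumption):

  lp -d HL-L2360DW -o sides=two-sided-long-edge somefile.pdf
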
Salvador Mirzo
2025-02-19 16:02:19 UTC
Reply
Permalink
Post by Lawrence D'Oliveiro
Post by Salvador Mirzo
*Duplex DuplexNoTumble/Long-Edge (Portrait): "<</Duplex true/Tumble true>>setpagedevice"
*Duplex DuplexTumble/Short-Edge (Landscape): "<</Duplex true/Tumble true>>setpagedevice"
I know what’s wrong: the first line should be
*Duplex DuplexNoTumble/Long-Edge (Portrait): "<</Duplex true/Tumble false>>setpagedevice"
At least it is for my Epson. Does that work for you?
That's the original line---what you saw in the PPD I posted was my
trial-and-error hack, which made no difference.
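
Either way, once duplex actually works per job, the "always
two-sided-long-edge" default I originally wanted (and had been getting
via /etc/printcap and lpd) should be settable on the CUPS side with
lpoptions, e.g. (queue name assumed):

  lpoptions -p HL-L2360DW -o sides=two-sided-long-edge
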
Scott Dorsey
2025-02-17 22:18:20 UTC
Reply
Permalink
Post by Salvador Mirzo
Post by Jerry Peters
Sounds like an incorrect PPD, which is where the various options come
from.
I have an HL220dw and CUPS supports both simplex and duplex printing,
selectable at the time I print.
Would you be so kind as to share your PPD? I could perhaps get more
clues from seeing one PPD file that really works. I have suspected
mine could be faulty, but I know so little about PPDs and PostScript.
My greatest insight so far is that the PPD file houses small
PostScript snippets that PostScript-generating software uses to make
the printer do one
*PPD-Adobe: "4.3"
*%%%% PPD file for HL-L2360D series with CUPS.
*%%%% Created by the CUPS PPD Compiler CUPS v2.4.10.
This appears to be a PPD file for a different printer. See if CUPS has
one available for the actual printer you are using.
--scott
--
"C'est un Nagra. C'est suisse, et tres, tres precis."
Salvador Mirzo
2025-02-19 16:03:59 UTC
Reply
Permalink
Post by Scott Dorsey
Post by Salvador Mirzo
Post by Jerry Peters
Sounds like an incorrect PPD, which is where the various options come
from.
I have an HL220dw and CUPS supports both simplex and duplex printing,
selectable at the time I print.
Would you be so kind as to share your PPD? I could perhaps get more
clues from seeing one PPD file that really works. I have suspected
mine could be faulty, but I know so little about PPDs and PostScript.
My greatest insight so far is that the PPD file houses small
PostScript snippets that PostScript-generating software uses to make
the printer do one
*PPD-Adobe: "4.3"
*%%%% PPD file for HL-L2360D series with CUPS.
*%%%% Created by the CUPS PPD Compiler CUPS v2.4.10.
This appears to be a PPD file for a different printer. See if CUPS has
one available for the actual printer you are using.
My printer, which is a Brother HL-L2360DW, is usually identified as one
belonging to the ``Brother HL-L2360D series''. The Windows driver, for
example, calls it just that, so I wouldn't expect to find a more
specific driver. Thanks for the check!
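
For completeness, the drivers the local CUPS installation offers for
this family can be listed with lpinfo (run as root; the output will
obviously depend on what is installed here):

  lpinfo -m | grep -i 2360
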
