free software resistance
the cost of computing freedom is eternal vigilance
### learning-to-code
*originally posted:* nov 2024
### some kind of intro
if learning to code isnt fun, somethings wrong. it might be the language youre using. it could be your teacher.
i want to be clear about this, i dont (much) care how you learn if it works.
id rather avoid methods that are going to fail more people- some people can learn any language first, and thats honestly great for them.
i dont believe learning a certain way can taint you forever. dijkstra used to say it could, but he was talking about higher-level coding. higher-level coding is often easier, its more abstract, and its great for solving bigger problems.
dijkstras assumption was also based on anecdotal evidence, not scientific study. he was highly critical of basic as a first language, but that was before basic was improved in many of the ways he wanted to improve computer languages in general- and before so many writers of basic code became writers of better code.
lower-level coding- something many of dijkstras arguments cant really touch- is closer to what the cpu chip understands, its often more efficient, and its a lot more difficult for most people.
in the 1970s and 80s, talking directly to the computer (if it was an affordable 8-bit computer, often targeted at home users) was somewhat easier. the amount of computer to talk to was a lot less, the total number of commands needed to do so was fewer, the conceptual model you had to learn was simpler. it was still (and is still) very, very numeric. to the chip, everything is numbers. to talk to it directly in its own language, you have to use lots of numbers.
a lot of people think this means you need to be good at maths to learn how to code. im actually terrible at maths- i struggled with algebra, i barely know any trig and i was pretty slow with geometry. this is all pre-university stuff- not everyone believes im terrible with maths, but they happen to be wrong.
what im better with is breaking some things down into steps. if you ever wonder what the difference between an "algorithm" and code is, the algorithm is a description of steps to accomplish a task with a computer- you can "describe" it with code, but also with english. when those steps are implemented in a certain computer language, thats code.
to be certain, being good with maths helps when you want to write more complex algorithms, or when you want to make existing ones faster. but it isnt necessary to break things down into steps.
### a terrible first example
to give you an idea of where my skills with maths drop off, i spent many years wishing i knew the algorithm to plot a circle.
drawing a single dot is easy: each dot or "pixel" (picture element) on your screen has a number. typically these are addressed by starting at the top/left of the screen, just like english letters on paper.
most often the pixel is referred to by its horizontal or x-axis value, followed by the vertical or y-axis value (x, y). the top is 0 and the left is 0, so the top/left pixel is at (0, 0). to move right, you add to the first 0. to move down, you add to the second 0. ive been able to do that since i was 7 or 8, and most people can do this.
to plot a circle is more complicated. you have to start at -pi, but since pi has an endless number of digits (people are still computing more of them), you start at an approximation of pi. i typically use 3.14159, since thats 1. the number of digits of pi i have memorised, and 2. i never plot a circle large enough that this level of accuracy isnt enough. but you can get more digits by copying them, or by multiplying the arctangent of 1 by 4. i dont really know what that means! but in python its math.atan(1) * 4, so whatever.
you can also use math.pi which is easier, but in older versions of python atan(1) * 4 gives you more digits than math.pi does.
so you start at -pi, and add a very small number (try .001 to start with, try making it smaller to see what happens) over and over- until it is pi or more than pi.
then for each of those numbers, you take the cosine of that number and multiply by the size of your circle (aka the radius) to get the value of x. since many computer programs expect integer values for pixel coordinates, you have to remove anything after the decimal point, theres typically an int() function for that. to get the y value, you use sine instead of cosine.
so the algorithm looks like this:
1. keep adding a tiny number to -3.14159 (or math.pi or whatever)
2. x is cos(number) times the radius, converted to an integer
3. y is sin(number) times the radius, converted to an integer
4. stop when the sum (youve been adding to -pi, the result is the sum) is equal to or more than 3.14159 (or math.pi or whatever)
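to make those steps concrete, heres a rough sketch of the same algorithm in python- it only collects the (x, y) pairs in a list, youd still add a centre offset and an actual plotting command for whatever screen or library you have (the function name is just for illustration):
```
import math

def circle_points(radius, step=0.001):
    # walk p from -pi to pi, just like the steps above
    points = []
    p = -math.pi
    while p < math.pi:
        x = int(math.cos(p) * radius)
        y = int(math.sin(p) * radius)
        points.append((x, y))
        p = p + step
    return points

print(len(circle_points(100)))  # how many (x, y) pairs were calculated
```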
in my own programming language, you would write that like this:
```
size = 100
for p (-3.14159, 3.14159, .001)
x = p ; cos ; times size ; int
y = p ; sin ; times size ; int
next
```
that wont actually draw the circle of course, it will just calculate the values of where the pixel goes. to actually plot the circle, youd want pset:
```
size = 100
for p (-3.14159, 3.14159, .001)
x = p ; cos ; times size ; int
y = p ; sin ; times size ; int
now ; pset (x, y, 15)
next
```
my language is pretty forgiving. the only part of that code you must use is:
```
size 100
for p -3.14159 3.14159 .001
x p cos times size int
y p sin times size int
now pset x y 15
next
```
this is because the syntax (basically the rules, like grammar is to english) is inspired by logo, which is traditionally very easy to learn. its also inspired by basic, which is traditionally easy to learn. though this is closer to basic and still valid code:
```
size = 100
for p = -3.14159 3.14159 .001
x = p : cos : times size : int
y = p : sin : times size : int
now : pset(x, y), 15
next
```
this actually IS basic:
```
screen 1
size = 100
for p = -3.14159 to 3.14159 step .001
x = int(cos(p) * size)
y = int(sin(p) * size)
pset(x, y), 3
next
```
so is this:
```
screen 1
size = 100
circle(x, y), size, 3
```
theres a circle command already- so why bother learning how to plot a circle at all?
first of all, when i was 7 or 8, i didnt plot circles using cos() and sin(), i just used the circle command. but what if i want to draw a circle without changing to a graphics mode (thats what "screen 1" does, by the way) or what if i want to draw a circle, of smaller circles?
one of the main things i wanted to do was draw circles using text. some versions of basic (there are dozens or more) let you do that, many would require you to code that yourself. some versions of basic dont even have a circle command- if you know how to plot a circle, you can still code it yourself.
but thats not how i learned to code- and though the math has been explained to me, i dont use the methods necessary to calculate cosine and sine- i just use the commands for that built into most programming languages. sure, itd be cool to be able to calculate those. but unlike the circle algorithm, i havent found an application for that.
### a bit of history
anyway, i didnt learn coding by doing graphics- logo is better for that.
the truth is, even graphics in basic can intimidate a lot of people. i learned how to code before i went to computer camp, i played with simple graphics commands there- then a year or two later i taught myself how the graphics commands in basic worked.
oh no, i dont mean i learned how to plot a circle. i mean i memorised (most of) the circle command. and how to draw a rectangle with the line command.
ive met lots of people who had fun coding for more than a decade and didnt learn graphics commands, because they were intimidated by them. i dont think theyre really that bad, but i can see how people would feel that way.
in logo though, you can draw a circle as easily as drawing a square. its got ridiculously easy code, like this:
```
move right 10
move down 10
move left 10
move up 10
```
you dont have to know "how to code" to look at that and guess it draws a square. heres a slightly less intuitive version:
```
forward 10
turn 90
forward 10
turn 90
forward 10
turn 90
forward 10
turn 90
```
its the same thing 4 times- you can use something like "repeat 4" to avoid typing it over and over.
but what if you change the 10 to 1, and the 90 to just 1 or 2? then you repeat it 360 times?
you can basically draw a circle that way, although technically youre drawing an equilateral polygon with up to hundreds of sides. can you also do that with the circle algorithm? yes!
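python actually ships with a "turtle" module inspired directly by logo, so you can try the polygon trick without installing anything. a minimal sketch (the speed setting is just there so you dont wait forever):
```
import turtle

turtle.speed(0)          # draw as fast as possible
for side in range(360):  # 360 tiny sides looks like a circle
    turtle.forward(1)
    turtle.right(1)      # turn 1 degree each time

turtle.done()            # keep the window open when its finished
```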
both basic and logo were developed in the 1960s. basic was aimed at university students who werent computer scientists, so "everyone else" could use computers too. and it was wildly successful- not only were uni students able to learn it, so were much younger students. logo was actually designed for children, and has been used to teach coding to many people as young as 7 and 8. i personally learned basic first, and not in school- but i picked up logo early on.
its odd that today, steve wozniak (inventor of the first apple computer) says that kids younger than 11 dont have the "logic" to learn coding, because it was kids younger than that who were learning logo in schools, on apple computers running a version of logo made for (if not by) apple. episode 6 of "bits and bytes" (from tvo in ontario, canada) from the 1980s has footage of such classes- and is still online.
but i think to some degree, we havent made learning to code easier. one of my favourite screwups in programming education is "javascript for kids!" because- WHY?!
### here be dragons
dont get me wrong, if you know either basic or logo, (but especially basic) then javascript MIGHT be a great SECOND language. heres some javascript if you want to see what it looks like:
=> ../gemwiki.js
heres an ugly bit of code from it, we can blame this entirely on me, i wrote it:
```
function linkproc(s) {
    if (right(s.trim(), 4).toLowerCase() == ".gif" || right(s.trim(), 4).toLowerCase() == ".jpg" || right(s.trim(), 4).toLowerCase() == ".png" || right(s.trim(), 5).toLowerCase() == ".jpeg") {
        return '';
    }
    return '' + fixlink(s.trim()) + '';
}
```
hideous, right? heres basically the same code written in python:
```
def linkproc(s):
    if s.strip()[-4:].lower() == ".gif" or s.strip()[-4:].lower() == ".jpg" or s.strip()[-4:].lower() == ".png" or s.strip()[-5:].lower() == ".jpeg":
        return ''
    return '' + fixlink(s.strip()) + ''
```
some obvious differences:
1. python has "def" (short for "define function") instead of "function"- one is shorter to type, i actually chose "function" for my language because thats what its called in basic, too.
2. the horrendous toLowerCase() from javascript (its full of awful things like that) is just lower() in python. yes, you have to doThisToIt in javaScript, because javascriptIsHorrible().
3. trim() from js is strip() in python. its practically the same thing. basic has had rtrim and ltrim since before javascript existed, but not every dialect had trim so to do both, youd use rtrim(ltrim(string)) or if you prefer, ltrim(rtrim(string)). even in my language its s ; ltrim ; rtrim or if you prefer, s ; rtrim ; ltrim.
4. you may think javascript is cooler than python because what the heck is this[-4:] but hold on, how does the right() function in javascript actually work?
```
function right(s, l) {
    return s.substring(s.length-l, s.length);
}
```
yeah, thats mine- but first of all youre very welcome to use it- and second, its so trivial that stopping you probably wouldnt be possible- and definitely shouldnt be.
javascript isnt for kids- its for overworked adults who are punishing themselves for something. but im not going to tell you not to use it if you really, really want to. it could be worse, you could be using wasm.
my problem with languages like javascript as a first language is twofold- first, its harder to learn javascript. it really is. if i knew grown men and women who were intimidated by simple graphics commands in basic, javascript is a lot worse. its so much worse. so is python sometimes, but ill get back to that.
and second, if people learn javascript as a first language, theyll think most languages are as awful as that. and most of them are! but not all of them, and no one needs to learn coding with the wrong idea that every language is as awful as javascript. some of them are really okay.
but even if you think im joking, (im not) the simple truth is that even python raises the youngest age at which most people would be able to learn coding, at least a bit.
you can teach logo to a 6 year old, no matter what woz says these days.
you can teach basic to a 7 or 8 year old, possibly younger.
you can teach python to a 9 or 10 year old, possibly younger. but 14? absolutely.
after people go to uni, if they still dont know any coding its going to be more work. by this time theyve learned that coding is hard to learn (its not) and theyre probably going to be given javascript (or worse) to learn first. but they might start out with python.
heres the thing about python- its not terrible. at least it wasnt. in fact, early python (through python 2, really) got its start based on other educational languages, particularly abc. abc is probably easy to learn, but more limited in usefulness and not necessarily as much fun.
python grew into a language used by industry, which led to a somewhat different language (many people will dispute this) by version 3. i find it more verbose, more tedious and much farther from basic. but i spent YEARS looking for a "21st century basic" and tried dozens of basic dialects looking for the best one combining traditional ease of use with modern power and features.
python 2 actually does this, and well. it does it so well that some of the "easy" features of basic (at its best) no longer seem as friendly as they used to:
1. basic is traditionally static-typed and uses "sigils" to denote strings, and later, various levels of numeric type precision. sound like fun? its hideous. i missed it at first, when my string variables suddenly looked exactly like my integers (bah) and floats (yay). it took me many years to hate sigils for string variables.
2. basic does arrays SO poorly- which you know, given the kind of limitations earlier dialects had in terms of what they could work with anyway, is something we can forgive. but the way most dialects bolt this on is hideous too, and python is absolutely elegant by comparison. just to be clear, an array (in python a "list") is a single variable name tied to 0 or more values.
so like you can have a variable called pet and set it to "cat", but an array lets you have a variable called pets, where pets[0] is set to "cat", pets[1] is set to "dog" and pets[2] is set to "lizard"- arrays are very powerful (theres a short python sketch just after this list). in basic, pets[1] would be pets(1) in some of the more popular or traditional dialects.
3. most dialects of basic are not as elegant OR simple when it comes to defining functions, calling functions, importing functions, using imported functions... so a lot of people learn to use "native" commands- which are friendlier, its true. but in python these things are much nicer than they are in typical basic dialects- and even easier.
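heres that pets example as runnable python, just to show how little ceremony a list needs:
```
pet = "cat"                        # one variable, one value
pets = ["cat", "dog", "lizard"]    # one variable, several values

print(pets[0])    # cat
print(pets[2])    # lizard
print(len(pets))  # 3- how many values the list holds
```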
### something somewhat practical
yeah, i really DONT miss basic.
but on the other hand, my own language (which i still use) started as a bit of a renegade basic-inspired dialect. heres a program i wrote with it a few weeks ago:
```
p arrstdin
forin each p
try
now each reverse
eachlen instr now "/" minus 1
ifmore eachlen 0
trimmed now left eachlen reverse print
next
except
now ""
next
next
```
heres the same program, indented with semicolons added- so the loops and conditionals and lines are a bit easier for a human to parse:
```
p arrstdin
forin each p
try
now = each ; reverse
eachlen = instr now "/" ; minus 1
ifmore eachlen 0
trimmed = now ; left eachlen ; reverse ; print
next
except
now ""
next
next
```
most of the workings are on these lines:
```
forin each p
now = each ; reverse
eachlen = instr now "/" ; minus 1
ifmore eachlen 0
trimmed = now ; left eachlen ; reverse ; print
next
next
```
p is just a collection of text lines that are sent TO the program to work with. the forin command "loops" through each of those- and with each line of text sent to the program:
1. the first line reverses the line
2. the second line finds the position of the first "/" in the reversed line, and subtracts 1 from it- which gives the length of the filename part
3. the next three lines keep just that part (everything before the "first"- in practical terms last- "/"), flip it back around, then output it. it skips this if no "/" was found (for instance, if the line is empty)
what is this for? if i have lines of text that look like this:
```
./2024/chtm.fig
./2024/chtm.fig.html
./2024/getname.fig
./2024/getname.fig.html
```
it will remove anything before the filename, and just give me:
```
chtm.fig
chtm.fig.html
getname.fig
getname.fig.html
```
in python the code might look more like this:
```
from sys import stdin ; p = stdin.read().split(chr(10))
for each in p:
    try:
        each = map(str, each) ; each.reverse() ; each = "".join(each) ; now = each
        eachlen = now.index("/") + 1
        if eachlen > 0:
            trimmed = now[:eachlen - 1] ; trimmed = map(str, trimmed) ; trimmed.reverse() ; trimmed = "".join(trimmed) ; now = trimmed ; print trimmed
    except:
        now = ""
```
thats closer to what my language would automatically translate it to.
by hand, that could be turned into this:
```
from sys import stdin ; p = stdin.read().split(chr(10))
for each in p:
    try:
        now = map(str, each) ; now.reverse()
        eachlen = "".join(now).index("/") + 1
        if eachlen > 0:
            trimmed = map(str, now[:eachlen - 1]) ; trimmed.reverse() ; print "".join(trimmed)
    except:
        pass
```
and you could do like my language does and just have a function called "strreverse" so this line:
```
trimmed = map(str, now[:eachlen - 1]) ; trimmed.reverse() ; print "".join(trimmed)
```
becomes:
```
print strreverse(now[:eachlen - 1])
```
or even:
```
print strreverse(left(eachlen))
```
and by that point, pythons not that bad. with a few added functions (or a library including those functions) its getting closer to basic.
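to give an idea, heres a minimal sketch of what strreverse and left could look like as plain python functions- enough to show the idea, not necessarily identical to the helpers my language actually generates:
```
def strreverse(s):
    # reverse a string: "hello" becomes "olleh"
    return s[::-1]

def left(s, length):
    # the leftmost characters of a string, like left$ in basic
    return s[:length]

print(strreverse(left("gif.mthc/4202/.", 8)))  # chtm.fig
```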
thats what i did a lot, before my own language included and translated this stuff for me.
### keeping a few languages simple
i still do python sometimes- earlier today i changed this line of my editor:
```
outfile.write(each + nl)
```
to this:
```
outfile.write((each + nl).encode('utf-8'))
```
why? because python 2 handles text differently than python 3. most of the time im happy it does, but when dealing with modern utf-encoded text, python 3 assumes youre going to use utf8 and python 2 doesnt. i would be absolutely fine with that, except getting it to stop doing the new default is pretty tricky and eluded me for years.
i once rewrote my language project to use python 3, used that for a few months, ran into a weird bug with the new version that i spent a while trying to figure out, and went back to the older version of my project and it worked.
after that, my language wasnt based on python 3 anymore. to be certain, theres no work done on python 2 and you shouldnt rely on it for anything intended to be secure- not just that, python 2 may actually have bugs in it that make it dangerous to run some code. of course, you can run dangerous code in any version of python- but python 2 might also respond to data that is malicious and targets old vulnerabilities.
so should you write software for python 2? generally, no... certainly not for many professional purposes. what they DONT tell you is, theres a version of python 2 that IS still maintained, and much safer than the dangerous version.
just like an algorithm differs from an implementation in that one IS code but the other doesnt have to be, a language differs from its implementation in that the implementation probably SHOULD be code, but the specification doesnt have to be.
which means you can have a language called python, plus an implementation written in c.
cpython is the (relatively) dangerous one that no one works to keep safe anymore. and youre probably fine if you use it- depending on how extensive or industrial your application is. but technically you really shouldnt, and actually you could have major problems if lots of people had to rely on your code.
pypy however, only needs cpython once. pypy is maintained, so when you run your python 2 code using pypy, it is safe. the reason more people dont mention this is political, even commercial. and i absolutely hate it- i vastly prefer pypy 2 to cpython 3, and some of the justification for moving is less than completely honest- its based on fear and convenience for some people, dishonesty and inconvenience for others.
and not just a little bit of inconvenience- python 3 is far enough from 2 that its killed or limited various projects and libraries. but whatever, ive worked on porting my language to lua (im not in love with it) and a c version (of my language) would be super cool to have, even if its beyond my ability to create.
c is a much harder language, its not entirely low level but it sometimes feels that way, and its often used to write operating systems or programming languages- as well as various other programs.
### easy and less easy
so you might be wondering, is it possible to learn c as a first language?
sure. its possible. javascript will be easier to learn than c, and python (2, perhaps 3) is likely to be easier than javascript. and basic will probably be slightly easier than that.
i learned python 15 years ago, and the language i wrote and use more has nearly reached its 10th anniversary (february 2015). i use python when i need to modify existing code, whether its my code or someone elses. somewhat rarely, i use the feature of my language that allows "inline" python code. and also rarely, i use python for working on something python is just better for- like my editor.
can you write a (good) editor in my educational language? yes, you could. would i want to?
not really. its honestly not great for that. i never intended it to be. could i add enough features that it was easier to make a version of my editor but have most of the features created with my language? yes, but its probably not worth doing that, even though i could accomplish it in a number of ways- even sparing any changes to the language itself.
its possible that doing that could be useful, and it would definitely make me more likely to write more features for the editor, but its more likely to just add overhead to keeping the project working as well as it does- or overhead to people understanding how the editor works. its a pretty tight script as it is, elegant or not, and that functionality would definitely make it more complicated.
as it is, the editor requires a minimum of one file. if you add a second, optional file you can code new features on the fly- then you can use those new features you just coded without even restarting the program. python is cool that way. emacs lets you do this too, but this editor is so much simpler.
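i wont walk through the editors actual mechanism here, but the general idea in python is simple enough to sketch- check for an optional file, run it, and whatever functions it defines become available without restarting. the filename extras.py (and the greet function) are made up for the example:
```
import os

def load_extras(path="extras.py"):
    # if the optional file exists, run it and collect whatever it defines
    extras = {}
    if os.path.exists(path):
        exec(open(path).read(), extras)
    return extras

extras = load_extras()
if "greet" in extras:    # a function the optional file might define
    extras["greet"]()
```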
some of the coolest stuff i learned, i learned by taking apart existing programs- just like with prose, reading code is just as important as learning to write code. but youre probably going to understand code better if you have a good foundation, and by "good" i mean complete enough that you arent guessing what the basics are- because you know them.
even the older version 1.5, from the diff, gives a pretty good idea how trivial it is to get it working- 1.6 is meant to cater better to people who dont know python, and just want the code to run.
if you want to start learning when youre 8 or younger, youre more likely to do well if you start with something really simple.
when taught to young students, logo typically starts with graphics and doesnt do more. its not that logo doesnt have other features, but its a pretty unconventional language for any features other than graphics.
both basic and logo have been reinvented countless times over the decades, so one dialect or another will have different command names and other rules that differ substantially. most people will learn one, or a small handful of these dialects, if any. i only tried dozens because it took a long time to find one i liked that much.
i really always wanted a sort of basic language with syntax inspired by logo and wondered what that would look like. im pretty sure my own dialect gets reasonably close- either way, that was a major inspiration for the design.
but when i started, the first code i wrote was silly, trivial stuff. a popular first program is just called "hello world" because all it does is say "hello world" on the screen. this is a useless program, obviously- it takes more code than just typing "hello world" does.
you could make a language where if you wrote a hello world program, it wouldnt take more than that- you would just write:
```
hello world
```
and the program would interpret that "code" as "i want to say hello world" and it would do it. but this would mostly miss the point of a hello world program- which is to see actual program code that does something trivial, as a demonstration.
### basic as a first language
this is an actual hello world program:
```
10 print "hello world"
```
and the next thing you typically learn is how to make it loop, which is what linus torvalds did in his own first basic program:
```
10 print "hello world"
20 goto 10
```
just to be clear, this is exactly what made dijkstra upset. if your loops are line number based, and your program is complicated, it leads to a complete mess. mainstream basic stopped having this limitation years ago, but the way around it is to use features that are more in line with the ideas that dijkstra helped create and popularise:
```
do
print "hello world"
loop
```
same program, only now it doesnt depend on which number is associated with the line.
linus torvalds learned basic as a kid, but it didnt stop him from writing the linux kernel. i think that makes a pretty big hole in dijkstras argument about basic tainting people. then again, im not the linux kernels biggest fan.
i used to like it more. either way theres absolutely no question that torvalds became a greater coder than i will ever be. hes a software developer, or engineer- hes famous. im a hobbyist. some people are kinder than that, but really, its alright. ive definitely known hobbyists that can code circles around me.
but in basic, after you learn how to cover every line with "hello world" you might want to start changing the colour:
```
10 print "hello world"
20 color 1, 7
30 print "hello world"
40 color 2, 7
50 goto 10
```
please, lets get away from line numbers now. its fun to know, the line numbers werent a feature of basic PER SE- they were actually a feature of the dartmouth timesharing system, or dtss- and fortran programs also used line numbers. but basic has a goto command, so line numbers may have started as a feature of dtss- goto makes them a feature of basic as well. as it happens, many early dialects of basic also used line numbers- but the dtss version was the original. basic comes from dartmouth.
```
while 1
print "hello world"
color 1, 7
print "hello world"
color 2, 7
wend
```
while... wend, and do... loop are similar loops, but while... wend is older.
as much fun as line numbers are for nostalgic purposes, most people who have moved to languages that dont require them never want to use them again. the reason people generally number them 10, 20, 30 etc- it isnt required. they do that because its easier to insert other lines later- up to 9 of them- if you start with lines numbered this way. its a clever way to edit text if youre using a teletype, credit where credit is due. electronic screens made editing text a lot more user friendly.
electronic screens have been around longer than basic has, but they were incredibly expensive. you could typically buy one screen, or several teletypes. since the computers of the day were intended to have many people using the same computer at once (hence, "timesharing system") it made a huge difference that you could get many teletypes for the price of one screen.
eventually screens became more affordable, and they definitely save users a lot of paper. ink of course, is still made from martian spring water or something incredibly rare and expensive.
what about the code from the previous example, whats "color 2, 7" do?
the second number is the background colour, and the colours are numbered like this:
```
0 black 8 grey
1 blue 9 light blue
2 green 10 light green (lime)
3 cyan 11 light cyan
4 red 12 light red (not quite pink)
5 magenta 13 light magenta (basically pink)
6 brown 14 yellow
7 white 15 bright white
```
numbers between 8 and 15 were "high-intensity" colours and by default, didnt work as background colors. setting a background colour of 15 would get you a 7, if anything. in modes that let you change the palette, it was sometimes different- if the mode let you even have a background colour.
you really only had to memorise 8 colours, and then add 8 to get the high-intensity version. you could also just look it up, but if you did this often enough youd have a pretty good idea what the colours were.
in python you can have 16.7 million colours, a number you get by multiplying 256 times 256 times 256. there are 256 shades of red, 256 shades of green and 256 shades of blue that are possible for most screens. that gives you roughly 16.7 million combinations. this is less about python as a language, and more about what modern computers are capable of vs. what most computers could do when basic was more popular. but there are absolutely versions of basic that do 16.7 million colors.
"color 2, 7" sets a background color of white, and the text colour will be green. in later versions of basic, you could use the sleep command to make the loop slow down:
```
while 1
print "hello world"
color 1, 7
print "hello world"
color 2, 7
sleep 1
wend
```
maybe youd want to use locate y, x to change where on the screen the print command outputs next. it "prints" to the screen, but as alluded to earlier it used to "print" on a teletype with actual paper. the locate command used row, column which is the opposite of x, y in graphics commands. it just happens that the typical notation for graphics isnt the same as the rows and columns typically used for lines of text.
maybe youd want to use the rnd feature to make it choose a "random" number for the colour, between 0 and 15. or a random row and random column, using locate. to do random colours, you could do this:
```
while 1
print "hello world"
color int(rnd * 15), int(rnd * 7)
print "hello world"
color 2, 7
sleep 1
wend
```
you would probably add "randomize timer" to the top of the code, because the rnd feature would give you the same numbers every time, unless you gave it a numeric "seed" to base its random number function from. since "timer" was a number representing the number of seconds since midnight, using it as the seed would reliably make the numbers "feel" random enough.
if that confuses you, i never understood the way it worked exactly. i understood it just enough to see the difference in action. modern languages tend to seed the random number generator for you, and with my language you can just do this:
```
p = randint 0 15 ; colortext p
```
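for comparison, pythons random module seeds itself when you import it, so theres no "randomize timer" step- getting a whole number from 0 to 15 is just:
```
import random  # seeded automatically on import

p = random.randint(0, 15)  # a whole number from 0 to 15, inclusive
print(p)
```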
after you spend enough time playing with loops and output, you might want to start having the computer get input from you. maybe have it let you type in a colour, maybe use a conditional to change the colour to 1 if you type in blue:
```
while 1
print "what colour?"
input p$
if p$ = "blue" then color 1, 7
if p$ = "green" then color 2, 7
print "hello world"
wend
```
and if you get to this point, you are already understanding the basics of coding: output (like print), input (like the input command), loops, like while wend- and conditionals, if... then.
i would add: setting and referencing variables, defining and calling functions, and basic maths- addition, subtraction, multiplication, division.
the computer does the calculations for you, doing "maths" is about telling it what calculations to do.
variables let you share data between different parts of the program- for example, if you get keyboard input from one line of code, where does the input actually go? it sets a variable to whatever you typed, then to let the program use that input on another line it just references the value stored by that variable:
```
print "what is your name?"
input p$
print "hi "; p$
```
basic uses colons ":" to separate instructions on the same line. a semicolon used with a print statement tells it to stay on the same line when it prints- dont go to the next line yet.
### modern alternatives vs classics
this program asks your name, then replies "hi, your name". in my language it looks like this:
```
p = "what is your name?" ; print ; lineinput
now = "hi " ; prints
now = p ; print
```
first it sets the variable "p" to the value "what is your name?" and prints p.
then it sets p to the value it gets from lineinput (literally whatever the user types).
then it sets the variable "now" to the value "hi " and prints it.
the difference between prints and print is like the difference between print "hi" and print "hi";
prints is short for "print semicolon" or if you prefer, print "sameline".
then it sets the variable now to the value of p, what was typed. this copies a value from one variable to another.
then it prints the now variable.
most lines in my language have to start with a variable, which code on that line tends to share.
lines that dont have to start with a variable (and actually forbid it) include code that uses more than one line anyway, such as loops and conditionals- also function definitions.
but this is meant to reduce the number of rules you need to know, or to teach. its easy to use "p" or "now" as a throwaway variable when you dont need to reference that lines variable.
in basic, there were so many features that were built in, rather than called like libraries.
libraries dont have to be written in the same language that calls them. you can write a library with features created in c, then they can be used by a program written in python.
the thing about native functions is, eventually it gets tedious to keep track of them. people dont know when theyve mastered the language. and you can fix that by making the language smaller. nonetheless, there are so, so many libraries you can load from a python program.
modern languages tend to support lots of large libraries- javascript is one example- so to add features to a language, you only have to write a library for it and call it from your program.
its good design, but in a way its too bad. because so many things have to be imported first, python can look overly simple, until you figure out how to reference a library. its not that this isnt easy to learn, its whether people bother to before dismissing the choice of language.
one of the laments of traditionalists in the basic community is that so much attention is paid to library calls that it doesnt feel like coding or learning to code anymore. and while the whole point of libraries (not a bad thing) is to provide shortcuts and useful routines, the culture around modern programming libraries is less about establishing a routine and more about the latest feature, the latest way of doing something that worked well already. the constant tweaking is great for an industry built around constant construction, but it makes programming feel less creative and feel more like just grinding in a big video game title. thats going to appeal to people with that mindset, sometimes less to people who just want computing to be better for everyone.
small, simple stable libraries are a joy even if few exist. in a world where software was really about freedom, libraries would be easier to maintain so that when the industry moved on to the next big hype, people who loved the software for what it was (rather than profit, rather than bragging rights, rather than grinding) could keep that library alive and maintained. that better future is never going to come from the python foundation- theyre just salespeople making yet another exciting peoples project into a soulless microsoft product. far too many things are like that, but thats how microsoft (and ibm) has always worked. small, simple stable languages are a joy as well, and more could exist. they wont come from any industry publisher.
i didnt move to python right away, before 2009 i tried other things. 2007 was the year i got rid of my last windows install, moving to dsl on one machine and xubuntu on the other. my priority was getting things done on the new machines without getting rid of the software i relied on. i could run basic (you may think its silly, i really dont care) from dosbox, from qemu, from dosemu. i was still looking for a better way, but i didnt want to burn a bridge while crossing it. it was a priority to change the operating system, and replacing basic which id used for so many years wasnt easy.
many very large books about javascript existed, i didnt want them. an acquaintance of mine told me she loved javascript and used to love basic, but there were very specific things i wanted to do. in particular there was a project called stickwiki or wiki on a stick, which i thought was absolutely brilliant.
in basic and many languages suitable for the shell, the idea is to output either directly to the screen, or a virtual term or term window. the two main ways to address what youre outputting to are either to append a stream of data (in other words, just print things out) or to give control information to "move" to another part of the screen or change output. in dos and dos-based basic, this is done as if by magic using commands like locate, or changing output with the color command.
when you move to more unix-inspired systems, this approach takes a backseat and you may find yourself emulating those commands by outputting control sequences. it may seem alien to someone who started on an 8-bit computer, but control sequences exist on dos and as part of the ascii standard, which dos supports. ansi sequences start with character 27 and really work in a similar fashion to logo, even if the commands themselves are both ugly and cryptic in appearance. you can, and i do, wrap them in a friendly function with a nice name like "locate" or "colourtext", then call those like any other command. anyone is free to use those routines, they dont have to write them for themselves.
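heres a minimal sketch of that kind of wrapping in python, assuming a terminal that understands ansi sequences (most do)- its not the exact routines i use, just the idea, with the 0-15 numbering from earlier translated to ansi colour numbers:
```
import sys

ESC = chr(27)  # ansi sequences start with character 27

# the colour numbers from earlier, translated to ansi colour numbers
CGA_TO_ANSI = [0, 4, 2, 6, 1, 5, 3, 7]

def locate(row, col):
    # move the cursor, like locate in basic (row and column start at 1)
    sys.stdout.write(ESC + "[%d;%dH" % (row, col))

def colourtext(colour):
    # set the text colour using the 0-15 numbering from the colour table
    bright = "1;" if colour > 7 else ""
    sys.stdout.write(ESC + "[" + bright + "3%dm" % CGA_TO_ANSI[colour % 8])

colourtext(10)  # light green
locate(5, 10)
sys.stdout.write("hello world" + ESC + "[0m\n")  # print it, then reset
```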
### javascript is sort of weird, but not for a gui
javascript however, doesnt do as well with this paradigm. instead of addressing the page like rows and columns, you still have the option of treating it like a stream- with document.write(). but what i prefer to do is create span tags with html, then have the code edit the innerHTML of the span. you can, and may prefer to use a div for this. what i like about the span tag is that, absent other span tags before or after it, it feels more like im addressing the entire page without changing the way it works. both a span and a div are technically like a virtual page on top of a page, but the span tag seems more like working with the page itself.
working with the page from javascript then, is more like modifying a large variable. you dont have to do it this way either- javascript has objects, you can just draw gui elements and address the way they work together. thats all youre doing when you create spans or divs, these become objects you can interface with on the screen. but with a span you can create links with html (by modifying the contents of the span) and clicking those links can tell the script to do things.
this is great for making a wiki, because the script can see whatever the text after # or ? is and act on it. you can create a text input element for typing text that the script can use. if youre used to less gui-oriented controls this can be a bit weird, but despite what i said about javascript its probably the least painful way to create a gui. the second-least painful is tk.
to see a gui made with javascript in action, quasi is the best example i have:
=> quasi/quasi30
if you click buttons on the "keyboard" control on the right, they will type into the box on the left. if you click view source, youll find there are 1200 lines of text including spaces. dont worry about that, use search to find the line that says:
```
javascript:cursorletter('q');
```
that line of code creates the q "key" on the keyboard. it also tells the key what to do when pressed (clicked)- i made this for touchscreens of course. cursorletter() is the function that handles input from this virtual keyboard, and calling it with 'q' tells the function which key was pressed.
so we search for cursorletter( by itself, and if we search up instead of down we find this code sooner:
```
function cursorletter(c) {
    var gc = getcursor();
    setcursor(parseInt(gc[0]), parseInt(gc[1]), c);
    setcursor(parseInt(gc[0]) + 1, parseInt(gc[1]), "");
}
```
if we both know javascript basics, we know "c" is the input variable from the key, and the only line of this code which i wrote years ago that has a "c" variable is:
```
setcursor(parseInt(gc[0]), parseInt(gc[1]), c);
```
as the author i have at least some idea what this does, it "sets" the cursor at the row and column of the text in that input/output control. the cursor itself is just text, so it actually inserts the key itself (c) and places the cursor after it. we can find setcursor( the same way we found cursorletter( but im going to suggest you search for "function setcursor" this time:
```
function setcursor(cpos, lpos, c) {
```
sadly this one is giant, so im going to do the work for you. the id for the input/output control is called dleft and is modified by the function by changing the value of dleft.value:
```
dleft.value=buf;
```
so now we just have to find "dleft with that quote included- and im going to strip the angle brackets to show you the rest of the line:
```
textarea id="dleft" style="position: absolute; top: 38%; left: 1%; width: 38%; height: 58%;"
```
basically, you create a textarea control, you can style it before and/or after the script runs, you give it an id (i chose dleft) and you interact with it by getting or setting the contents of dleft.value with your script. this is slightly different from doing that with the innerHTML element of a span, but its the same concept.
many other things are the same, except loops. in a typical shell script, looping happens indefinitely if you ask it to. with javascript, the page keeps waiting for input events regardless of any loops (its a feature of the browser itself) and if you managed to create a loop that kept going or took too long, the page wouldnt load at all- it would just get stuck.
if your page is interactive and keeps doing things while waiting for input (like a game) you might prefer to use a library (with all the usual caveats) or you would have to use timer intervals to trigger the code that would run in an ongoing loop on a more "traditional" or shell script or basic program, etc.
if that sounds tedious, i think it is. on the other hand, many guis in general are sort of like this. you make the controls, you make code triggered by each control, and any loop you would use like a for loop (javascript has that one) is used to iterate through say, each character of a string (gemwiki does that) or steps or items in an array (lines of text) and things like that. the thing is, while that code is running your page wont be responsive. if it literally takes a minute, the page will seem locked for that amount of time- and many browsers will complain, if only to give users the chance to stop the script.
to say the least, javascript is far from perfect. little guis with limited functionality are trivial to make, while complicated projects can still be fun, but the more you want to do the more you may prefer a language or platform that isnt as wasteful as the browser. but then its your choice.
### taking over an old windows machine
my first pc (not including an 8-bit running cassettes and roms) felt like mine entirely- it wasnt really, it ran microsoft/ibm dos (separate products, largely a branding difference) and proprietary software. much of the freedom was illusory, and some of it bothered me- for the most part though, the operating system went on floppy- programs went on floppy- the programming environment went on floppy- my own programs went on floppy.
programs were largely self contained (not if they were interpreted, but that was one extra step) and if you didnt like something you deleted it. i hadnt even heard of the internet yet, someone described it as something you access over a modem as text, and i was like "how is that different from a bbs?" this is all hilarious now- also i was a kid. and better answers (or a demo) might have helped. at any rate, around 1995 i finally got on the web- but that was years later.
its not that i miss floppies. i actually said when i was younger that it would make more sense to carry our files on chips... you dont have to believe that, and its not like anyone asked me and got the idea from me. it really was the direction the industry was headed though.
i really didnt get the concept of throughput, when usb drives finally arrived my first impression was "great, do they have to use usb though?" id gone from 360k floppies to 1.2 and 1.44m, i thought superdisks were the coolest and by this time i was using a hard drive, but somehow i thought instead of making us switch to a new port and new motherboards we would just magically get faster serial ports (with the same connector, for some reason) or lpt ports. its pretty funny in hindsight. then again its good to be a little sceptical of the industry, even if its better to be informed than to just guess.
i dont miss floppies, and it took years but i dont miss dos either. i was on a mission to have more control of my computer, and for people to have more control of their software. this is skipping back and forth between when i used floppies and when i was only a couple years away from getting rid of windows, because of how closely theyre related.
i liked windows for a while. i really liked when i could have more than one dos prompt on the screen and move them around, switch between them- i liked some gui applications to be sure. i tried visual basic 1.0, and if javascript had even existed yet it might have given me the foundations of understanding what visual basic was about- but i just didnt like it. thats fine though, i was happy to control windows from text programs running on the command line.
you can actually run gui programs from the command line, which is cool. i mean, of course you can, windows didnt care whether you ran notepad from control-r or from command.com. but that means if you have a cool enough text program to use instead of the start menu, maybe you can run it from a keyboard shortcut instead of using the start menu. because i kind of hated the start menu.
what didnt i like about windows? i mean, its great to have a hard drive but they put so much stuff on it. little self-contained programs, meet cabinet after cabinet after cabinet of nonsense, all needed to make a program move somewhere else on screen. it was excessive. and while this was not likely to get fixed, i did have the option to fight it.
i was always curious how much of that stuff was actually needed. my ability to remove bloat from other operating systems is absolutely something i first learned with windows- not just 3rd party applications, but even a stock windows system. i wanted to know what every single file was for. that wasnt going to be possible without removing as many files as possible.
you know how it goes, people dont know what theyre deleting, they delete something important, "help my computer doesnt work". i mean i could just reinstall windows if i needed to. i wasnt content to even use windows 95/98 until i finally understood cab files and the installer. before windows 95, if i needed to i could make a boot disk and reinstall the whole system without relying on a single plastic platter that was both bootable and contained everything needed to install- and was impossible for me to back up, even with a cd writer. i was not thrilled by that. eventually though i learned enough to get to where i could at least back up my install media? its no wonder i was getting tired of windows though. diskcopy a: b: and youre good. but now...
anyway, i knew what i was deleting. i would delete everything carefully. i made a program that used dir c:\ /b /a /s to list all the files on the system. did you know: that a hidden/system file in a hidden/system folder in another hidden/system folder wont show up with dir /b /a /s?
because i noticed. and i made a program with basic to list every file that showed with dir c:\ /b /a /s and show the total count- and then, it would change the file attributes to unhide everything it found. then it would do it again. it would keep doing this until every file showed up for dir /b /a /s and the total count was the same two runs in a row. now a program that controls every file on the system will actually work.
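the original was written in basic, but heres a rough sketch of the same idea in python- the helper name is made up, it would have to run on dos/windows for dir and attrib to exist, and the attrib flags are an assumption on my part:
```
import os

def list_files():
    # every file dir can currently see
    listing = os.popen("dir c:\\ /b /a /s").read()
    return [f for f in listing.split("\n") if f.strip()]

previous = -1
files = list_files()
while len(files) != previous:    # stop when the count is the same two runs in a row
    previous = len(files)
    for path in files:
        # unhide everything found so far (-h -s clear the hidden/system
        # attributes- an assumption, check your dos version)
        os.system('attrib -h -s "%s"' % path)
    files = list_files()

print(len(files))
```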
oh, and you had to be booted into "dos mode" too. because when windows is running its more particular about things. so i would drop to dos mode, run my program, get a FULL list of every file on the system, and focus on system files: .exe, .com, .dll, .vxd. you dont want to delete anything you need, AND you want to remove as much as possible? i even followed the toastytech instructions to remove the browser by modifying the cabs and checksums.
but theres a lot of stuff! so i made a program that made it easier to "remove" files without removing them. windows cant use a .dll that isnt named with .dll, it cant use a .vxd that isnt named that way. it cant run an .exe or .com thats called something else either. the extensions are required because only a few of them are executable from windows.
this all started when i read that quartz.dll or quartz32.dll had a vulnerability. well of course i didnt need that dll, right? so i would just remove it. id rename it whatever.lld and reboot and wait for some program to crash. i could always rename it back from dos mode, right?
windows provides windows libraries to windows software. if quartz.dll is needed for showing pdf documents or something, then windows wont be able to open a pdf without it. if the program is important enough, windows wont run at all. but even if windows does need a particular .dll to show a pdf, theres another program somewhere that can show pdfs without the windows library. so i just remove the .dll needed for that, find out what breaks, replace it with something 3rd party and theres one less vulnerability- at least from software included with every copy of windows.
this is great, but its not very efficient. what my program did was list every file- its been more than fifteen years and ive migrated to a new operating system platform twice since then, but it may have focused exclusively on the executable, library and .vxd (device/driver files?) types, and here is how it worked:
1. list every file, page up/down with pgup and pgdn, up and down with the cursor keys. inkey$ or input$ from qbasic can get more than one byte, you can focus on the byte that tells you if those keys are pressed.
2. to "highlight" the file youre "working" on, simply print the line thats "highlighted" with color 0, 7 instead of 7, 0. each line is limited to a maximum length of 80 characters. i figure this was enough but i was capable of finding any longer filepaths if i needed to deal with those.
3. either enter or spacebar would "disable" the file so windows couldnt use it. how, do you ask?
rot 13 (no, that would be overkill) is a simple alphabet rotating "cipher" that shifts every letter 13 letters. since there are 26 characters in the alphabet, running rot13 once "encodes" and running it a second time "decodes". this isnt a good cipher, its good enough for "hiding" an answer to a riddle or something from someone who wants to wait before they check what the answer was. its good enough for "hiding" a vxd or dll from windows, but we only need to "encode" the extension.
extensions that werent palindromes were simply reversed: windows doesnt use moc, lld or dxv files, so hitting enter on a file would turn com to moc, moc to com, dll to lld, lld to dll, vxd to dxv and dxv to vxd. you could tell at a glance whether a file was disabled or not- i couldve added colour highlighting to show disabled filetypes, but i didnt really need to.
exe of course, became xex and xex became exe.
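heres that renaming trick as a small python sketch- the function name is made up, and on a real system youd pass the result to os.rename, this only shows the name-flipping:
```
def toggle_name(filename):
    # reverse the extension: quartz.dll <-> quartz.lld, driver.vxd <-> driver.dxv
    # exe is a palindrome, so it gets swapped with xex instead
    base, dot, ext = filename.rpartition(".")
    special = {"exe": "xex", "xex": "exe"}
    return base + dot + special.get(ext, ext[::-1])

print(toggle_name("quartz.dll"))  # quartz.lld
print(toggle_name("setup.exe"))   # setup.xex
print(toggle_name("setup.xex"))   # setup.exe
```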
depending on how ambitious i was feeling, or how confident i was that something wasnt needed, i would just highlight and "disable" between 5 and 20 system files, reboot windows and see what did and didnt work. sometimes id disable something windows needed to keep the shell running- obviously the thing to do then was go back, enable those files and try others instead. of course i would exercise caution and use what instincts i had about what looked important and what didnt. but this was the experimental part of the process, where i check if things are actually needed or not. if it is, thats fine. just put it back and move on with the list.
but when it worked, oh yeah! id run all the software i had installed if possible, id check to see if it still worked. and if it did, id move the disabled files to another folder and keep that going for a few days. after the disabled files sat in a separate folder for a week or so (id still do more batches, more files) i figured i could delete them.
i got my windows installation (with gui) down to about 10mb. i was only using it to run some very specific cd burner software to do iso backups, and i wanted to boot windows with a floppy (doable actually) and then use the gui from a directory tree that i would actually remove and restore from backup when i actually needed windows.
but no matter how ridiculous all of that was, it was really fun and it helped me take as much control of my own windows installation as possible, and still run windows. at one point i think i actually killed the ability to run control panel, but i could run the cpl files that control panel makes icons in a single window for. i suppose i didnt think to count those with the dlls or vxds, but theyre not exactly spread throughout the system either.
the command in qbasic that lets you run shell commands is called appropriately, shell. in my language its also called shell. in python its called os.system and in lua its called os.execute. python 3 is weird and not covered here.
### after windows
most of the platforms ive used since windows are more reasonable about this sort of thing- for one, i had less interest in every individual file because my first priority was to experience as much of this new software as possible. but also, when youre using projects that are part of an operating system where most of it is freely licensed, people collaborate on packaging more and things are organised and arranged in a way that is- or at least, was- less difficult to sort out and find out what actually needs what.
instead of guessing what a .dll was for or doing a search to see someone elses information (which isnt always accurate, databases about proprietary libraries can be incomplete or wrong) i could just look it up, usually with the package system.
for years, this was reassuring- until projects became more consolidated by sponsors and large corporations who wanted to help, or interfere, or help, and interfere. i spent so much time trying to get away from microsoft, everyone decided to develop their software on github and then microsoft bought them. many thousands, if not hundreds of thousands of projects then left github, but today practically every programming language, every library, theyre all github based now. they all use libffi more or less, libffi is based there. the ones that use gcc, gdb or gnu libc? those are hosted by ibm. this is exactly what i wanted to move away from. you cant call it freedom with a monopoly, because its a contradiction in terms. if they have a monopoly, no project is free.
but this was supposed to be about when i first moved away from windows, before i even learned python. and i was still using basic, often with emulation, and trying to move projects i made for windows to equivalents or where possible, improvements on this new platform. and bash is very popular, i mean i dont like it- i liked that dos was so simple and i also liked that bash was so powerful, and what i really wanted was something more in the middle of them. that didnt seem to be available, so i settled for learning to cope with bash.
its not that i hate unix per se, i actually think its pretty cool. bash, its authors will tell you again and again, is "not unix". and thats fine, as i just mentioned its actually ibm now, so it really ought to be called ibm bash. its typically compiled with ibm gcc, the main alternative to microsoft llvm. i really, really hate these companies- ive wanted a divorce for decades, and they just keep insinuating themselves back into our lives. never, ever trust a company who have spent the past few decades aggressively making it literally impossible for you to get away. everyone should be warning everyone else, and the fsf is spineless and worthless about this- to the point that their mission is forfeit, the fsf is a complete and pointless joke. the only thing they "fight" for is you funding them. i dislike osi even more.
but unix is actually pretty charming. its completely unfair you know, to judge "unix" by what bash is like. i found out a few years ago that i actually like ksh- its like bash without all the coddling. i dont mean bash coddles me, i mean i had to constantly hold bashs hand through everything i wanted it to do, constantly smoothing out a million gotchas that invariably turn up between some combination of too many features for each "utility" and too many features for the shell that holds them together.
the thing about a programming language de jure (not to be confused with the programming language du jour, thats another rant entirely and i certainly dont mean my own, because to be the anything "du jour" in this industry, its got to be popular) as opposed to a shell language, is that if you have too many "gotchas" to the way a string works, youre in trouble- youve designed it wrong.
a shell language is a different beast, and gotchas are how the whole thing works. its not that im against this idea, what im against is the excess. the more excessive a shell language gets, the harder it is to use. if im using a programming language, and it has too many features, the main concern is whether each feature is going to be maintained- dont like the feature? dont use it.
shell language features change the way each line of a script is parsed and debugged, and every bit of additional complexity is a problem you can run into whether you intended to use that feature or not.
and i hated bash. i didnt even know how much i hated it, until i finally liked something else. i dont care about zsh or fish or any of the modern, fancy shells. i didnt even have to learn ksh, i already knew it when i started using it. but what i didnt know is how much less finicky bash could be. i just took it for granted that if i wanted to use the shell, i would have to massage every line of script for 30 minutes to get it to stop doing things i didnt want. i really, really incredibly loathe bash- i wish i could chase off everyone who promotes it before they cause anyone else years of pointless trouble.
except for this: you know (maybe you know) how if you put commands in between $() like this: filecount="$(ls | wc -l)" you can just set $filecount to whatever the output of that command is? i mean thats not specific to bash, ksh does it too. but SIMILARLY, <() does that except instead of giving you the output as text, it gives you something that acts like a file, which you can hand to commands that expect a filename. and THATS pretty awesome. but not enough that its worth using bash, id MUCH rather use a shell that isnt horrible and lets me run something the way i intended on the first, second or fifth try.
also whoever keeps gnu which on github is a traitor to all users, use this instead- its 0-clause bsd licensed like this article:
```
#!/bin/sh
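# look in each folder listed in $PATH for the command named in $1 (a tiny which replacement)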
for each in $(echo $PATH | tr ":" "\n") ; do ls $each/$1 2> /dev/null && break ; done
```
but do i hate the shell? far from it. that line i mentioned before? this one:
```
outfile.write((each + nl).encode('utf-8'))
```
as i said, thats from an editor that lets me run shell commands. and unlike shell or os.system, it calls them using os.popen- which is io.popen in lua.
the difference is that instead of just running the commands and letting the term window handle the output, it pipes the output into an array that the program which called it can use.
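in python that looks something like this- a minimal sketch of the idea, not the editors actual code:
```
import os

# run the command, capture everything it printed, and split the output
# into a list of lines that the calling program can use however it likes
lines = os.popen("ls -l").read().splitlines()
for line in lines:
    print(line)
```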
os.popen takes the output of the shell and puts it into the editor itself, which jed is also capable of doing along with emacs and surely, vim. only with my editor you only have to hit ctrl-t at whatever line youre on. (i dont know how its done in most editors that can run shell commands, ive tried it with jed before).
i literally just opened xterm running ksh, to open my editor with the source code for my lua translator to check which command lua uses. i have nothing (or practically nothing) against the shell itself.
but it was during that time, when i was doing countless command line related tasks (as if i ever went pure gui on any platform before, including macos) that i was trying to learn what i could about shell scripts. in dos theyre called batch files, which is funny. the shell goes back to timesharing systems (which unix is far from the earliest example of- dtss was much simpler, and is older) because they were designed to let various people interact with the system at once. "batch" computing as far as i know is an even older term, so its amusing that unix is older than dos and still uses terminology thats more "hip" than what dos uses.
### how shell scripts are different
the two ways programming languages are typically implemented are as a compiler, or an interpreter. a compiler translates the entire program, and then the compiled code runs without it- you dont need the compiler to run the result. an interpreter doesnt necessarily translate the whole program at once, and its also the thing that runs the code- which means the code wont run without the interpreter.
if youve ever "compiled" a python script, what youre actually doing is typically glueing it together with a copy of the interpreter. this is still an interpreted script. if you compile a c program however, the output will run without the c compiler.
interpreted programs are usually slower, but they have their advantages. if you have a very large python program and change one line, you can run the entire thing without recompiling it first. ive edited a very large c++ program before, which took 5 minutes to compile every time i wanted to test a change to it. one of the main things i was trying to do was remove what i could (the program was not very modular) so that it would compile faster and i could test changes more efficiently. but the program itself would be slower if it were interpreted.
shell programs are interpreted, but the differences dont end there. most compilers and interpreters either work with commands that are native to the environment- which is to say, included- or they work with commands that are imported from a library, which is similar except they arent included by default.
you might be thinking of the exception to this, which is that many compiled and interpreted languages can also call shell commands. thats true of course, except that 1. this is in many ways an exception to the way the language works, its not designed around this feature and 2. usually the language that calls shell commands doesnt actually parse the shell commands itself, it simply feeds them to whatever the default shell is. in other words, its not REALLY calling the commands, its actually feeding them to another interpreter- another shell, which is separate from the language calling it.
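a small sketch of what i mean- the pipe in this string is shell syntax, and python never parses it, it just passes the whole thing along:
```
import subprocess

# shell=True means python doesnt interpret the command itself- it hands
# the whole string to the default shell (/bin/sh on unix-like systems),
# and that separate interpreter is what actually runs ls and wc
subprocess.call("ls /bin | wc -l", shell=True)
```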
the shell itself does things a little differently. its an interpreter, but instead of working primarily with commands that are built in (it may have those too) its primary job is to work with commands that are external to, and separate from, the shell.
one example of many programs that the shell works with is the python interpreter itself. if i run the code i use to replace gnu which, changing "$1" to "pypy2" like this:
```
for each in $(echo $PATH | tr ":" "\n") ; do ls $each/pypy2 2> /dev/null && break ; done
```
thats shell code to search my $PATH for a program. the path is a list of folders or "filepaths" to automatically search when you run a program. $1 lets you put the shell code in a file and run it like this:
```
which pypy2
```
but for demonstration purposes we can also replace $1 manually. running the above code returns the folder that pypy2 runs from: /usr/bin
on an openbsd machine it really should be /usr/local/bin, if we follow a good rule all the time. i usually do, none of my programs are designed to add themselves to /usr/bin- but that doesnt mean ive never put one there.
if you dont get the path thing, dont worry about it too much. its enough to know that which is used to tell you where the file youre running is located.
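if it helps, heres roughly the same idea as the shell version above, as a python sketch (the function name which is just for illustration):
```
import os

# walk each folder listed in $PATH and report the first place a program
# with the given name exists and is executable
def which(name):
    for folder in os.environ.get("PATH", "").split(":"):
        candidate = os.path.join(folder, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate

print(which("pypy2"))
```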
and the point is that pypy2 is a completely separate program from the shell. it isnt a shell script, its a python interpreter and on this system, a 64-bit elf binary executable.
other binary executables (they arent shell scripts) include the web browser, the program that manages windows on the screen, and the "ls" and "find" commands.
ls is a program that gives simple or detailed information about a list of files. it can give information about a single file or even a group of folders, but its designed primarily to give information about one folder at a time.
find is a program that can be used to list files from a single folder, but is designed primarily to search more than one folder and list or run commands on each file.
both of these programs can be called from the shell, but the shell is a separate program:
```
$ which ksh
/bin/ksh
$ which ls
/bin/ls
$ which find
/usr/bin/find
$ which which
/usr/bin/which
```
ksh is the shell interpreter located in the /bin folder, and ls is the program in the same folder that runs when you use the ls command.
so while python can call a separate shell that calls separate programs like ls or find, the shell is designed to interface with those commands directly.
complicating this is that python can probably use fork() or exec() to call these programs directly too, but i prefer and was always accustomed to the shell having all the features of the shell at its disposal.
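a rough sketch of that idea, just for illustration (this only works on unix-like systems): fork a child process, replace it with ls using the exec family, and wait- no shell anywhere in between:
```
import os

pid = os.fork()                       # make a child process
if pid == 0:
    os.execvp("ls", ["ls", "-l"])     # replace the child with ls; searches $PATH
else:
    os.waitpid(pid, 0)                # the parent waits for ls to finish
```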
if this is too handwavey, and it probably is, whats really wild is that typically a programming language will get its command syntax from the interpreter or compiler or library, while part of the command syntax for the shell comes from the program itself.
what i mean by this is, if you have a python program using native commands, the syntax and arguments for calling those commands comes from python itself. if theyre from a library, the arguments come from the library but the syntax to call them comes from python.
and to a degree, the shell does have its own syntax- otherwise it wouldnt know where one call stops and the next one starts.
but the arguments and syntax passed to ls come from the ls binary itself, the arguments and syntax for find come from that program, and if you write your own python script using sys.argv to get command line arguments, then its YOUR python script that determines the syntax for calling it from the shell, its not the shell that decides.
provided you dont break any of the basic rules of the shell, that is.
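for example, heres a little hypothetical script (call it greet.py)- the shell only splits the command line into words, and the script itself decides what those words are supposed to mean:
```
#!/usr/bin/env python
import sys

# sys.argv is the list of words from the command line; argv[0] is the
# script itself, so the actual arguments start at argv[1]
if len(sys.argv) < 2:
    print("usage: greet.py name [name ...]")
    sys.exit(1)

for name in sys.argv[1:]:
    print("hello, " + name)
```
run it like greet.py alice bob and it prints a greeting per name- the shell just delivers the words.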
a basic example of this is that when calling ls, you can say ls -lrS as a command argument. the shell doesnt know what that is, it just hands it over to ls to figure it out. find can be called to run other programs using -exec, almost the same way that python can call shell commands like this:
```
find . -type f -exec wc -c {} +
```
that will return the bytecount (or filesize) of each file located by the find program. the -exec {} + stuff is syntax from the find command itself, its not from the shell.
on top of this, remember that the shell can do things like:
```
p="$(find . | tail)" ; echo "$p"
```
and that will first run the command find . and send the output of that command to the tail command, which by default only returns the last 10 lines of whatever you send to it. if find only has 5 lines of output because youre running it from a folder with few files and folders, tail will show them all. if it has hundreds of listings, tail will only show 10. you can run tail with an argument to return more lines, or fewer.
but then because you used $() it will take the output of tail and set $p to THAT, so that now you have a variable called p which you can reference anywhere else from that shell session.
this concept is called command substitution, though i pretty much just call it dollar parentheses because thats how its written- and its an example of how strings can be very different for the shell than strings in other languages.
you see, you might think that if you say echo "hello world" its going to echo hello world:
```
echo "hello world"
```
it will just say hello world after that.
from there you might think that echo "$(find . | tail)" will just say $(find . | tail) after you run it. but it doesnt, it runs programs and gives their output.
theres a version of this in python too, called string substitution. but because string substitution is based on string formatting, rather than running a shell, its a lot easier to avoid. in the nearly 10 years that ive been writing fig programs- which run python under the bonnet, ive never once accidentally triggered string formatting in python. not once.
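a quick sketch of the difference- in python a plain string stays plain unless you explicitly ask for formatting:
```
# nothing in this string gets expanded or executed- its printed as-is
s = "count: {} percent: %s $(reboot)"
print(s)

# formatting only happens when you call it yourself:
print("count: {}".format(5))
print("percent: %s" % "ten")
```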
the shell has more rules, and while i dont mean to discourage you from using it (ksh is pretty predictable, thats why i like it so much- lots of people who consider themselves "non-coders" manage to use bash too) its honestly pretty annoying to have to learn as many things as ive had to, just to avoid unwanted side-effects from whats intended to be a plain string- or at least, a simpler shell command.
so why do i use the shell at all? why do i use the shell FROM python scripts, when its considered better practice to use native python features over shell calls whenever possible?
because the shell can still do in one line what can actually be relatively tedious to bother with from python. id much rather call the find command than implement it using python, same for sha256- which ive done with python before.
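heres a rough sketch of what i mean- one line that borrows find, next to a pure python version of more or less the same thing:
```
import os

# one line of shell, called from python:
files = os.popen("find . -type f").read().splitlines()

# roughly the same thing in pure python- more code, and no shell involved:
files2 = []
for folder, subfolders, names in os.walk("."):
    for name in names:
        files2.append(os.path.join(folder, name))
```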
the shell is actually good at what its good at.
on the other hand, if youre handling a lot of string data, the shell is also good at SOME of that, and for the rest of it python can be a lot nicer. if you use different tools, you get a better idea what one is good at and what another is better at, and vice versa. i do like python scripts more than shell scripts for most things. but sometimes the shell is just too good of a shortcut for part or even all of a task.
plus, as was mentioned before- python calling the shell isnt a one-way thing. the shell can also run python scripts and even string them together, taking the output of one and feeding it directly to the input of another python script, or to another program that isnt written in python at all.
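for instance, a tiny hypothetical filter script (say, upper.py) only has to read its standard input and write to its standard output, and the shell can then drop it into the middle of any pipeline:
```
#!/usr/bin/env python
import sys

# read whatever is piped in, write it back out in upper case- so something
# like  ls | python upper.py | sort  treats this script as just another command
for line in sys.stdin:
    sys.stdout.write(line.upper())
```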
both python and the shell can be used to glue together code from different languages and different authors. but the way the shell does this is VERY simple, and really cool. the way i designed my language is heavily inspired by the way the unix shell does pipelines:
```
echo "shell code" | tr a-z A-Z
```
```
now "program code" | ucase | print
```
the | character, which is called a vertical bar, is optional in the second example, but most commands are stuck together the same way.
in python you would say now = "program code" ; now = now.upper() ; print now ...or more likely you would say:
```
print("program code".upper())
```
im not trying to knock python as a language, many people consider it desirable to be able to use parentheses this way- basic works this way, lisp does, even command substitution works this way. but if youre trying to make a language that makes things as easy for people new to coding as possible, i think in some ways a command pipeline is friendlier. languages that chain together functions like .lower().split() do this too, and that works in python- for methods that allow it.
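chaining just means each call hands its result to the next one, reading left to right- a bit like a pipeline:
```
# lower() returns a new string, split() turns it into a list of words
print("Program Code".lower().split())
```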
as you can see from the which replacement, shell code also has loops that can iterate through items. it can also use this to loop through a numeric range, or simply loop repeatedly. loop syntax in the shell isnt as nice as it is in some programming languages, i think, but even the shell has nicer loop syntax than javascript, or c.
license: 0-clause bsd
```
# 2018, 2019, 2020, 2021, 2022, 2023, 2024
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
```
=> https://freesoftwareresistance.neocities.org