writings on math, logic, philosophy and art

programming articles

Human technology: Text files

It is a well-known engineering principle that you should always use the weakest technology capable of solving your problem - the weakest technology is likely the cheapest and the easiest to maintain, extend or replace, and there are no sane arguments for using anything else.

The main problem with this principle is marketing - few people would sell you a $10 product that solves your problem forever when they can sell you a $1000 product, with a $10 per month maintenance cost, that will become obsolete after 10 years. If you listen to the “experts”, you would likely end up not with the simplest, but with the most advanced technology.

And with software the situation is particularly bad, because the simplest technologies often cost zero, and so they have zero marketing budget. Since nobody benefits from convincing you to use something that costs nothing, nobody is actively selling it. In this post, I will try to fill that gap by reviewing some technologies for web publishing that are based on plain text and putting forward their benefits. Read on to understand why and how you should write everything you write in plain text files and self-publish it on your own website.

Read More

Structured programming: how to write proper if statements

The if statement (or if expression) is the cornerstone of every modern programming language - it is so pervasive that we rarely think about how exactly we should use it, or how it is meant to be used. But despite its popularity, if wasn’t always there, and it wasn’t always as pervasive as it is nowadays, so its role is, I’d argue, still somewhat misunderstood. So in this article, I will examine some mistakes that we can easily avoid in order to improve our code.

Read More

Why is functional programming awesome

I really am not the best person to author such an article (I am not that into programming anymore, and I never was a real expert in it), however I am doing it, because I have been waiting for someone else to write it for years and kept noticing the following phenomena:

  1. People who understand functional programming cannot make themselves understood by the general (programming) public.
  2. Many of the people who can make themselves understood by the public don’t understand enough to be worth listening to (all the functional programming articles that are understandable don’t go much farther than “You should use pure functions, man!”).

Roughly the same thing has been called “the curse of the monad” by some people: “Once you understand it, you lose the ability to explain it”. It is clear by now that monads are not something you get in an afternoon, but I think you can get some idea of what FP (functional programming) is in an afternoon. Or, you know, in a year or so. But in order to spend that time you need some motivation. You probably need someone to tell you why exactly you need to know about FP. Why it is awesome, so to say. And so my article begins.

Read More

all articles

programming shorts

A small Haskell task

Haskell is great. And I want more people to know it, so this is just a quick overview of its capabilities, using code that solves a simple task I saw on Mastodon.

The task is the following:

Return a list of all combinations (i.e. order doesn’t matter) of the given length. Example: Given “abc” and 2 the answer is [“ab”,”ac”,”bc”] but order doesn’t matter at either level.

Its solution begins with a type signature, which in our case is the following:

combinations :: [a] -> Int -> [[a]]

A short guide to type signatures

Here are several specifics of Haskell type signatures:

Firstly, you’d notice they go on a separate line from the implementation e.g. instead of:

foo (a : Int) : Int -> a + 1

we do:

foo :: Int -> Int
foo a = a + 1

Secondly, there is no syntactic difference between the parameters of the function and the return type, i.e. instead of

combinations :: ([a], Int) -> [[a]]

(i.e. combinations accepts a list and an integer and returns a list of lists.)

We usually write

combinations :: [a] -> Int -> [[a]]

(i.e. combinations accepts a list and returns a function that accepts an integer and returns a list of lists.)

You would realize that this isn’t such a huge deal, once you understand that you can easily convert the former to the latter with the following higher-order function:

uncurry :: (a -> b -> c) -> (a, b) -> c
uncurry f = \(a, b) -> (f a) b

Oh, and I forgot one thing — a, b and c are all abstract types, but you don’t have to write stuff like

uncurry <a, b, c> :: (a -> b -> c) -> (a, b) -> c

Haskell figures that out by itself.
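A quick sketch of the conversion in action (the names add and uncurry' here are mine, for illustration - the Prelude already provides its own uncurry):

```haskell
-- A curried function: it takes an Int and returns a function Int -> Int.
add :: Int -> Int -> Int
add a b = a + b

-- The conversion from above, renamed so it doesn't clash with
-- the uncurry that the Prelude already provides.
uncurry' :: (a -> b -> c) -> (a, b) -> c
uncurry' f = \(a, b) -> (f a) b

main :: IO ()
main = do
  print (add 1 2)             -- curried application: 3
  print (uncurry' add (1, 2)) -- the same function, applied to a tuple: 3
```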

So, let’s continue with our task, namely:

Return a list of all combinations (i.e. order doesn’t matter) of the given length. Example: Given “abc” and 2 the answer is [“ab”,”ac”,”bc”] but order doesn’t matter at either level.

You may think you need sets to solve this. Not true! It is enough to use recursion.

A short guide to recursion

Haskell uses recursion to traverse lists, which you might think is more complex than the traditional approach, but it is actually quite simple. E.g. instead of doing this to sum the elements of a list:

function sumList(list) {
  let total = 0;
  for (let i = 0; i < list.length; i++) {
    total += list[i];
  }
  return total;
}

We do this:

sumList xs = case xs of
    []     -> 0
    (x:xs) -> x + sumList xs

Or in English

“The sum of the elements of an empty list is zero.” “The sum of the elements of a non-empty list is the first element plus the sum of the rest of the elements.”

(“x:xs” is a pattern match, where “x” is matched by the first element of the list (known as the “head”) and “xs” by the rest of the elements (the “tail”))

The definition is simple, as lists themselves are a recursive (or inductive, as it is sometimes called) data type. Here is how you would define the list data type in Haskell:

data [a] where
    []  :: [a]
    (:) :: a -> [a] -> [a]

Or in short:

data [a] = [] | a : [a]

By the way, this is the actual definition of lists in Haskell (in Haskell, the list construction operator : is a function, like all other operators).

If you understand these definitions, you would realize that list traversal functions like sum use the same pattern as the definition of the list data structure itself.
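To see the pattern once more, here is length written in the same shape (my example, not from the original task) - one clause per list constructor:

```haskell
-- Mirrors the two constructors of the list type: [] and (:).
lengthList :: [a] -> Int
lengthList []       = 0
lengthList (_ : xs) = 1 + lengthList xs

main :: IO ()
main = print (lengthList "abc")  -- 3
```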

Base cases

So, back to our task (again).

In a recursive definition, we handle the base cases first. In our case there are two of them, as we operate on two values: a list (of letters) and a number (the length of the combination).

We know that there are no combinations of an empty list of letters (we don’t care what the length is, so we use the underscore).

combinations [] _ = []

What about combinations of length 0? You might think that this is also empty, but there is exactly one such combination, namely the empty string (it is important to return it, or else the whole thing won’t work). Note also that in the final function this clause has to come before the previous one: Haskell tries clauses top to bottom, and combinations [] 0 should produce [[]], not [].

combinations _ 0 = [[]]

You’d notice that we write those two definitions separately. That’s because…

A short guide to pattern matching

…because Haskell supports multiple definition clauses for a function, each using a different pattern match, so instead of:

sumList xs = case xs of
    []     -> 0
    (x:xs) -> x + sumList xs

we can write:

sumList []     = 0               
sumList (x:xs) = x + sumList xs

Moreover, the compiler can detect if we missed a case (with the -Wincomplete-patterns warning enabled), helping us avoid runtime errors from non-exhaustive matches.

Actual Solution

And now let’s finally solve this task.

Excluding the base cases, the solution is:

combinations (letter : letters) n =
  combinations letters n
    ++ map (letter :) (combinations letters (n - 1))

But what does that mean? Let’s break it down:

combinations (letter : letters) n = 

We take the first letter and the rest of the letters (we are guaranteed that the list is not empty, because we already handled that base case), and we take the length, so, if we use the example data, we would have:

letter = 'a'
letters = "bc"
n = 2

Now, a combination can either include this letter, like "ab" and "ac", or not include it, like "bc".

We find the combinations that don’t include the letter by calling combinations with the rest of the letters:

  combinations letters n

Now, the more complicated case is finding the combinations that do include the letter. To find them, we take the combinations of the remaining letters with length n - 1 and then prepend the letter to each of them.

    ++ map (letter :) (combinations letters (n - 1))

So, for our data, this would be map ('a' :) (combinations "bc" 1), which would generate "ab" and "ac".

You should now know enough to understand what this code does: we use the higher-order function map to prepend the letter, feeding it the partially applied (curried) function (letter :), which is short for \a -> letter : a.
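We can check that prepending step on its own (a quick sketch):

```haskell
main :: IO ()
main = print (map ('a' :) ["b", "c"])  -- prints ["ab","ac"]
```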

Conclusion

We can verify that our code always reaches the base cases: we subtract 1 from n and remove letters from the list until one of them runs out.

If we reach the first base case:

combinations [] _ = []

This means our branch of the search is not valid (we ran out of letters and we still haven’t completed a combination). In this case, the expression evaluates to map (letter :) [], which evaluates to [].

If we reach the second one (where the length is zero)

combinations _ 0 = [[]]

In this case, we have reached a valid solution. The enclosing calls then prepend the letters which make up the combination.
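Putting all the clauses together, here is the whole solution (note that the length-zero clause comes first, since Haskell tries clauses top to bottom and combinations [] 0 must yield [[]], not []):

```haskell
combinations :: [a] -> Int -> [[a]]
combinations _ 0 = [[]]  -- one combination of length 0: the empty one
combinations [] _ = []   -- no combinations of positive length without letters
combinations (letter : letters) n =
  combinations letters n
    ++ map (letter :) (combinations letters (n - 1))

main :: IO ()
main = print (combinations "abc" 2)  -- ["bc","ac","ab"] (order doesn't matter)
```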

Agda

At the Agda headquarters:

“OK, guys, so our user pool consists only of folks who already know Haskell and Emacs. Is there a way to narrow it down even more?”

“I got it: what if we allow Unicode, so they also have to know LaTeX?”

“Brilliant!”

Haskell

“Oh, in Haskell this app is just a one-liner!”

The one-liner:

app a b c d e= runMonadT <(@)> ( (LiftValue a b c) *!* stop) . \a -> d(a) $ e

Bash

I think that bash is so popular because it is such a terrible language, and so hard to work with, that every time you make something work you feel like a wizard and your dopamine goes through the roof.

Unix and Lambda

I am still searching for a good way to say it, but basically: those who don’t understand Unix are indeed doomed to reinvent it. However, those who don’t understand Lambda are doomed to always remain half-blind when it comes to programming, just roaming around, not knowing what they are missing.

Timezones

“May you become an expert in time zones!” (ancient programmers’ curse)

We love opensource

Companies’ pages:

“We love open source!”

Companies’ Github profiles:

“5 followers”

Data

People like to be reminded:

“data never speak for themselves”

But of course they don’t; after all, the objects of the dataset, the phenomena being investigated etc. are all defined using some theory. Never forget that you have the theories even before you have the data (how could the data speak for themselves then?).

Deep work in digital setting

I arranged my virtual desktops inspired by the workspace organization method in “Deep Work”: an entryway, a preparation area and a core workspace.

The idea: have 3 virtual desktops, or three computers each of which corresponds to a level of focus in your work.

Desktop 1 - Entryway: serves as an area for transitioning from the outside world to your work environment. Here we store all the distracting windows (social media, porn etc.), as well as administrative stuff, like issue trackers.

Desktop 2 - Preparation Area: Should be arranged to prepare your mind for deep work. It might include some relevant resources (books, specialized message boards, inspirational galleries), as well as some experimental projects.

Desktop 3 - Core Workspace: This space should be dedicated solely to deep work. It should be free from any potential distractions and have just the tools you use (consoles, IDEs, drawing software, you know what you use).

The idea is to only go from one desktop to the one next to it, without fast context switches.

Old tech

Most people, after owning a laptop, or another piece of tech for more than 2 years:

“Yeah, this thing is getting outdated, doesn’t support XYZ, I guess it’s time for a replacement…”

Me, after my 10-year-old laptop fails:

“THIS IS BULLSHIT!”

Critical and non-critical functionalities

Engineering protip: know the difference between the critical and the non-critical functionalities of your product, i.e. between what your product MUST do and what it SHOULD do. E.g. for cars, engines are critical, stereos are non-critical.

Critical parts of the product should be designed to be simple and durable. Non-critical parts should be cheap and easy to replace and remove. The failure of non-critical stuff should never affect critical stuff.

Critical parts are your “core business”, as business people say — they give you competitive advantage.

On dependent types

Dependent types are super confusing, but at the same time they feel easier than normal ones.

I guess that makes sense for every advanced concept - a concept wouldn’t exist if it didn’t make things easier once you finally get it.

I think that they are hard to learn because we consider them as something fancy, but they actually are the simpler thing, and it’s normal, first-order types that are weird.

With first-order types, you have two things, types and values; with dependent types there is only one thing, e.g. “Number” has the type “Set” in the same way that 1 has the type “Number”.

On federated networks

If done right, a federated network like Mastodon would be much more reliable and cheaper to run than a centralized one: at large scale, the network has to be federated under the hood anyway (i.e. you have to have several caches in different locations, several database nodes that replicate data to one another, several moderation teams etc.), and so a network that is federated by design has many advantages over one that pretends to be centralized.

1 + 1 and programming paradigms

You can understand a lot about different programming paradigms from the way in which they implement 1 + 1:

  • In imperative languages, such as C, it is just 1 + 1 - arithmetic is built in.
  • In object-oriented languages like Smalltalk and Ruby, it is 1.+(1) - plus is a method of the object 1.
  • In functional languages, like Haskell, it is still 1 + 1, but plus is an infix operator which is actually a function, so you can also write (+) 1 1. In Lisp you just write (+ 1 1) directly.
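The Haskell point is easy to verify, since an operator is just a function with a symbolic name (a quick sketch):

```haskell
main :: IO ()
main = do
  print (1 + 1)                  -- infix: 2
  print ((+) 1 1)                -- the same operator as a prefix function: 2
  print (foldr (+) 0 [1, 2, 3])  -- and passed as an argument: 6
```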

Hard things in computer science

Finally got the final list of the three hard things in computer science:

  1. Naming artifacts
  2. Concurrent
  3. Off by one errors. data processing

all shorts

all topics

Subscribe for updates

Powered by Buttondown.

Support the site