Giddy up: this is functional programming today

Long and winding has been the path that brings us to the finish line. We have contrasted an imperative language specification with a declarative one, explored optimal symbolic computation strategies, and intuited the promising future of automatic theorem verification (and scratched the surface of automatic theorem proving).

So what? As enjoyable and relaxing as the reading has been (laughs), and however much mathematicians insist that something need not be useful to be worth racking your brain over, what really matters to us programmers is the practical application of it all.

The final corollary we will ruminate on in this post is whether (as it seems) functional programming is going to play, in the short to medium term, an important role in software development in general, and whether it is worth dedicating our precious time to mistreating our neurons when they could be quietly composing symphonies in PHP. Do you dare?


## What is what?

When we contrasted imperative with declarative programming, I deliberately used the concept of the *“context of a language”* to show that the *“declarativeness”* of a language is subjective (there is no ambiguity-free definition), and that a language is therefore declarative insofar as it abstracts, absorbs, and assimilates the essence of certain processes and **frees us from them**.

The *“declarative context”* of SQL (which I gave as an example) frees us from having to think about and optimize the processes related to the efficient handling of large volumes of information across many different sources (hashes, trees, buffers, query execution plans, etc.). What it does not free us from is knowing which data to merge and how (e.g. taking the Cartesian product of the tables *BUSINESS* and *EMPLOYEE*).

The *“declarative context”* of an imperative language (simplifying a lot) frees us from having to think about the best way to translate structures like **if**, **while**, and **for** into the binary code of a machine. What it does not free us from, **and this is the point**, is continuing to think in terms of the instructions supplied by the machine.

Thus, between SQL and (for example) C there is a very important qualitative difference: the former has managed to establish a framework **that is independent of the machine**, while the latter has only translated the machine into another notation.

Therefore, although there is no formal (unambiguous and universally accepted) definition of what a declarative language is, and although there are many nuances, it does not seem reasonable to argue that **C** is a declarative language.

Having said that, let us think about what it means for a language to adopt the functional paradigm. In the end we will see that, although many languages can be labeled functional, only those that meet the requirement of being functional *“always and at all times”* can proudly wear the label of *“functional language”*.

## The machine prevents theoretical development

Imperative languages such as C emerged to ease the transcription of algorithms onto a machine with specific physical characteristics. If, instead of the processors we know, the machine had been built on different physical principles, it is very unlikely that C (and its kin) would have been designed the same way. It seems reasonable to think that the more different the machines, the more different the (imperative) languages designed to program them will be.

Thus, we see that most algorithms have been designed with this kind of (*“imperative”*) machine in mind. Throughout history, programming strategies have been refined step by step, replacing *“crude strategies”* (e.g. a **for** loop or a global function) with more refined ones (e.g. a **foreach** or a private method).

**The problem** is that this refinement has generated a monstrous number of software development structures (singletons, dynamic typing, closures, hashes, polymorphism, reflection, static/dynamic members, unions, dependency injection, etc.) for which a proper formalization, and therefore an understanding of their intimate relationships, becomes completely impossible, preventing us (given the complexity involved) from finding **complete solutions** (God's solutions). Let us say that the disorder is such that putting things in order becomes a titanic goal.

It is therefore clear that current software development takes its roots from structures designed for the machine and not for algorithms (given a problem, we do not seek the best theoretical solution, but the one we can apply in practice to our machine).

## Declarative programming

Many languages that can be considered declarative (e.g. LaTeX) arose spontaneously from attempts to abstract the basic tasks they perform; however, only with languages like SQL do we appreciate how fruitful it is to establish a formal framework that allows us to understand the intimate relationships among a language's structures (e.g. in SQL, being able to generate isomorphic [equivalent] but more efficient query plans).

There are a few models that formalize the concept of computation; the most famous, I suppose, must be the Turing machine. However, the Turing machine is still a very simple *“imperative processor”* (a finite state machine) which, although it lets us understand perfectly how information is processed, contributes absolutely nothing about the idea underlying the algorithm.

That is to say, among all the computational models we can find, it seems (until we have a formalization we cannot be sure of anything, of course) that we should focus on those that assimilate and absorb, to the greatest extent, the abstract concept of computation.

Although it is not easy for me, I will try to give an example: a finite state machine (like Turing's) *“jumps”* from one state **A** to another **B** following a very simple and well-established procedure. For example, consider the following very simple two-state machine:

As we can see, the machine starts in state **S₁**: if it reads a **1** from the input it stays in **S₁**, but if it reads a **0** it moves to state **S₂**, and it will not return to **S₁** until another **0** is read, and so on.
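The two-state machine just described can be sketched as a transition table (a toy reconstruction in Python; only the state names and the 0/1 alphabet are taken from the description above):

```python
# Toy two-state machine: reads a string of "0"/"1" symbols.
# A "0" toggles between S1 and S2; a "1" leaves the state unchanged.
TRANSITIONS = {
    ("S1", "1"): "S1",
    ("S1", "0"): "S2",
    ("S2", "1"): "S2",
    ("S2", "0"): "S1",
}

def run(inputs, start="S1"):
    """Execute the machine: it knows how to jump, but nothing about why."""
    state = start
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state
```

The machine ends in **S₁** exactly when the input contains an even number of zeros, yet nothing in the table says so: that meaning lives only with whoever designed it.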

With state machines like the one described, all kinds of algorithms can be represented (e.g. the game **Crysis 3**), but there is **nothing** in the state machine that provides information about what the algorithm it implements means; it just executes it, nothing more. To study the algorithm and understand its relationships, we must access the *“pseudocode”* from which someone generated that state machine. That is why reverse engineering compiled programs is so difficult in practice: you can study the code of a virus, a specific piece of software protection, etc., but it requires great effort and, in general, it is not possible to recover the intentions of whoever programmed it, only to know what the code does (from which state to which state it jumps).

Declarative programming, however, tries to absorb the essence of the processes. Continuing with the example, suppose a declarative language states that the functions **F(x)** and **G(y)** are such that their composition is commutative, that is, computing **F(G(z))** is the same as computing **G(F(z))**. In that case, we have information that holds **for any pair of such functions**: no matter who programs them or how, we can *play* with that information.

What is the use of knowing that? Suppose we have to write a program that computes **F(G(z))**, but it turns out that computing **G** takes very significant effort (e.g. 30 seconds), while computing **F** is very fast and efficient and, moreover, **F** tells us in 90% of cases that the final computation will fail and in 10% that it will not. Since we know the composition is commutative, we can compute **F** first and only compute **G** in 10% of the cases: a 90% time saving! But the most important thing is that this works **for any pair of functions that satisfy commutativity**, and therefore our compiler/interpreter/machine will be able to perform this very important optimization **in any program**.
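A minimal sketch of this idea (the functions, the failure marker `None`, and the pass/fail split are all invented for illustration):

```python
def f(z):
    # Fast filter: fails fast on odd numbers (the cheap "90% will fail" check).
    # It propagates failure, so f and g commute even after a failure.
    if z is None or z % 2:
        return None
    return z

def g(z):
    # Expensive step (imagine ~30 seconds of real work); propagates failure.
    # Squaring preserves parity, which is what makes f and g commute here.
    if z is None:
        return None
    return z * z

def optimized(z):
    # Because the composition commutes, run the cheap filter first and
    # pay for g only when f succeeds.
    y = f(z)
    return None if y is None else g(y)
```

For every input, `optimized(z)` agrees with the naive `f(g(z))`, but it only pays for the expensive step on inputs that pass the filter.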

Is my example impractical? Suppose we have a function **CheckLength(s)** that, given a string of characters, returns the empty string if its length is greater than 5 Kbytes, and the same string if it is less than or equal to 5 Kbytes. We have another, very expensive function **ComputeHash(s)** that takes each word in the string and searches a database for specific sequences (given some condition) of the same length as that word. Under these conditions, **CheckLength(ComputeHash(s))** is equivalent to **ComputeHash(CheckLength(s))**! Which compilers do you know of that would be able to swap the inefficient **CheckLength(ComputeHash(s))** for the tremendously efficient **ComputeHash(CheckLength(s))**?
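A toy reconstruction of this pair (since the article does not specify the database search, `compute_hash` below simply replaces each word with a same-length stand-in, which is the only property the argument needs):

```python
LIMIT = 5 * 1024  # 5 Kbytes

def check_length(s):
    # Empty string if the input exceeds the limit, the input itself otherwise.
    return "" if len(s) > LIMIT else s

def compute_hash(s):
    # Stand-in for the expensive per-word database lookup: each word is
    # replaced by a sequence of the same length (here, simply reversed),
    # so the overall length of the string is preserved.
    return " ".join(word[::-1] for word in s.split(" "))
```

Because `compute_hash` preserves length, the two orders always agree, and `compute_hash(check_length(s))` does the expensive work only on inputs that pass the size check.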

As in the previous articles of the series, we once again intuit the tremendous qualitative leap that a formalization of what computation (an algorithm, etc.) means would represent, compared with quantitative advances such as low-level optimizations or ingenious structures with no intimate relationship to the rest (e.g. dependency injection).

## Functional programming, functional languages

The functional paradigm aims to apply mathematical thinking to the problem of algorithm design (and, by extension, software development). To do so, instead of thinking in terms of the physical machine, it begins by asking the most basic question: what is a function? It was Alonzo Church and Stephen Kleene who, in the 1930s, developed the *lambda calculus* independently of any specific physical machine, analyzing only the properties of its objects of study (functions).
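To get a feel for that machine-independent starting point, here is a sketch (mine, not the article's) of Church numerals, the lambda-calculus encoding of the natural numbers, written with nothing but single-argument functions:

```python
# A Church numeral n is "apply f to x, n times": numbers as pure functions.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting applications of +1 starting from 0.
    return n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
```

Nothing here appeals to memory cells or machine instructions; arithmetic emerges from function application alone.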

Functional programming establishes certain properties that the elements of the language must satisfy (much as I imposed commutativity on the functions of the previous example); from these properties, interesting general results are built and obtained.
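A concrete instance of such a property (my example, not one the article names) is the *map fusion* law, `map f ∘ map g = map (f ∘ g)`, which licenses a compiler to replace two traversals of the data with one:

```python
def compose(f, g):
    # (f ∘ g)(x) = f(g(x))
    return lambda x: f(g(x))

double = lambda x: 2 * x
inc = lambda x: x + 1
xs = range(1_000)

# Two traversals of the data...
two_passes = [double(y) for y in (inc(x) for x in xs)]
# ...fused into a single traversal, justified by map f ∘ map g = map (f ∘ g):
one_pass = [compose(double, inc)(x) for x in xs]
```

The law holds for any pure `f` and `g`, so the rewrite is valid in any program, regardless of who wrote the functions.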

Will these properties turn out to be the desirable ones? Will they allow a formalization that is **useful in practice**, and not only in the theory of computation? **We do not know**.

*“What?”*, you will be telling yourself, *“after so many promises of capabilities infinitely superior to those of imperative programming, it turns out we do not know where all this will end up?”*

I am sorry, but I cannot be sure. In the history of mathematics (and of physics as well) it is very common, especially in young areas, for several theories to arise, for some to be forgotten in favor of others that seem more promising and, in time, for one of the discarded ones to be revived.

Functional programming, in the shape it is currently taking, is only one of …