Would you be able to guess which image represents each paradigm?
In general, it can hardly be denied that the future of programming will (almost certainly) be declarative. Merely managing the flow of events efficiently in a simple desktop application can cause headaches; and if our processes are processor-hungry, making them efficient under varying conditions (few or many CPUs, little or much memory, …) requires patience and talent (good parallelism is not just "put one task on each core"). Season the dish with data flows synchronized across network and disk, and we have the perfect mind-eater.
It seems that imperative programming is becoming more and more expensive, while declarative programming lures us ever more strongly with its siren song. If you feel like it, let's think it through in our own particular "imperative vs. declarative programming" match.
The costs of imperative programming
For a long time I have wondered how it is possible that, on my 8088 at 12 MHz with 512 KB of RAM, applications ran wonderfully (there were very good applications: text editors, more-than-acceptable office suites; and that era [up to the 386] was my golden age of video games). It is clear that computers do much more today (e.g., the video memory they have to manage is now 1920 × 1080 × 32 / (640 × 200 × 2) = 259.2 times larger than before), but the computing power has multiplied many more times. For example, the quality of the architectures is now much better (at the same clock speed, current processors are tens or hundreds of times more efficient), word sizes are 8 times larger (from 8 bits to 64 bits), and compilers generate much more optimized code using more sophisticated algorithms (which did not exist back then). Even leaving all that aside, the raw amount of processing is now 2 × 2500 MHz / (1 × 12 MHz) ≈ 417 times greater, and it turns out that the CPU does not even render, since there is a separate GPU just for that! Why, then, is my computer (viewing a simple web page!) not lightning fast? Where is my computer's computing power going?
We have the answer in "combined complexity"; put briefly, it would be something like: "the resulting total complexity is much greater than the sum of the complexities of the parts". For example, when we have to write a program, we usually divide its total functionality roughly into small parts that we solve individually. Thus we solve the small parts efficiently, but when we combine them, we obtain inefficient solutions.
On the one hand, we are not capable of holding a view of the system that is simultaneously global and precise; on the other, the compiler cannot help us solve this problem. Thus, any (moderately complex) problem that we are not able to solve, imperative programming will not solve either.
I am going to give an example. Suppose we have the following functions in C; what would the compiler do?
Let us break down the obvious strategy and the optimal strategy that the compiler achieved (GCC 4.7.1). First, the resulting assembler:
Let us begin:
- mod4 is resolved "foolishly" by performing a division, but someone has told the compiler that the remainder of dividing by 4 in base 2 is precisely the two least significant bits. An AND operation is much cheaper (in processor time) than a division.
- div4: similarly, someone has told the compiler that dividing by 4 in base 2 amounts to shifting the bits two places to the right, discarding the two least significant ones.
- divX: but what if the divisor's value is unknown (at compile time)? Well, that cannot be optimized.
- divX4: wow, someone has told the compiler to expand functions inline to see whether it can optimize them, and here it has worked.
- sumDiv5: what if we call the non-optimized function a constant number of times? Again, someone has told the compiler that it can unroll the loop, inlining its body and optimizing where possible (only two real divisions where five are indicated in the code).
- distributive: what is going on here? Why has it not reduced such a simple calculation? It has not been able to reduce that code to "one increment, two multiplications, and a single bit shift: (P * N * (N + 1)) >> 1". Admittedly, it has elegantly reduced the multiplication to a series (P, P + P, P + P + P, …). To whoever argues that there may be overflows: these can easily be predicted (and branched on in the worst case). That is, if N = 1000, then 1000 jumps will be executed (with their 3000 additions), when it could all be reduced to 4 simple instructions! We were not able to solve the problem (that is why we iterated), but neither was the compiler (it was not able to reduce our code).
"So, have we solved the problem?" Sorry to say, but no. "What if we accumulate optimizing wisdom?" (i.e., we tell the compiler more things)? Well, the number of combinations among the resulting strategies grows at a gigantic rate; in fact, with just the code already written today, many more optimizations could be performed than compilers are currently capable of.
There are really three problems that compilers (of imperative languages) have:
- First, compilers (current ones, of course) are not able to infer (deduce, guess, think, …) an optimization that has not been explicitly programmed beforehand. Therefore, they can never improve the algorithm the programmer specifies; at best they make it run faster (yes, that is an improvement in itself, but the algorithm remains the same), for example by replacing one way of dividing with another. In short, if a programmer writes a bad algorithm, it will still be bad after compiling (with super-optimized instructions, yes, but bad).
- Second, even if the programmer writes an optimal algorithm, compilers currently cannot obtain the minimal set of instructions corresponding to the written program. This is because compilers only know how to optimize what they have been explicitly told to optimize.
- Third (and most important), the code a programmer writes today is written in today's context; tomorrow it will most likely be obsolete. By obsolete I mean that if, tomorrow, you want that program to take advantage of new improvements (techniques, algorithms, hardware, …), it will have to be recoded, reprogrammed, and no compiler will do it for you. (Make no mistake: if your program uses a library and the library gets optimized, your program runs faster precisely because the library has been recoded; your program itself is exactly as good or bad as before.)
In short, nothing compilers currently do helps minimize the growing "combined complexity" of our systems. That is, if you want an efficient algorithm, program, or system… work for it; and if you want your program to remain efficient tomorrow, work for it again tomorrow. Moreover, you cannot expect the compiler to detect your inefficiencies, so if you want efficient code… work for it twice.
NOTE: In four paragraphs it is impossible to analyze and cover every aspect and counterexample that could be raised: writing an intelligent object hierarchy only makes your code last a bit longer; pointing to an obvious (and boring) function that will be used today and two centuries from now (e.g., the arithmetic mean) does not mean that the problem I pose disappears for the rest of our programs and for all future technologies; etc.
To give one more example: how many of your programs could run under CUDA without recoding? If tomorrow the first quantum computer were released (no, not that one, a real one), do you think your code would run efficiently on it? Intel is studying general-purpose processors with hundreds or thousands of cores; would your algorithms take advantage of that power? Etc.
I think we can sum up declarative programming with the famous phrase: "say what you want, but not how you want it".
Ultimately, then, the final goal of declarative programming would be to "state the problem colloquially (if possible, without ambiguity) and obtain the solution automatically". We are not going to be that ambitious, and I hope you will bear in mind the (real and current) context in which the comparison is made.
Now that we have defined declarative programming, let us agree not to twist language into absurdities such as "I tell C that I want a loop, not how I want it" and the like. It is true that between imperative and declarative programming there is a long, soft range of grays. At the end of the day, I try to stick to practical situations, and the declarative examples I give below are limited by current technological (and theoretical) capacity.
In practice, whatever the language, it is imperative with respect to the problems it cannot solve by itself, and declarative with respect to the problems it does know how to solve by itself. The balance between the two tells us whether it is an imperative or a declarative language.
SQL (Structured Query Language)
I cannot think of a better-known, more widely used, or more prolific declarative language than SQL. Although we must recognize that the problem it solves is well defined and bounded (which makes it easier to solve than it would otherwise be), the resulting product (SQL) is an absolute gem. Its deep roots, reaching down into formal logic, have nurtured a powerful tree whose fruits we still enjoy today (with no sign of being replaced; no, NoSQL does not replace it). I encourage you to read (see references) about the origins of this magnificent language.
Following the definition given above, SQL is imperative with respect to the problems of specifying what to do with the data, but declarative with respect to how to retrieve that data. That is, SQL does not know how to (does not provide means to) decide on its own whether to combine table A with table B; but once we have told it ("combine table A with table B"), it knows how to do the rest very efficiently.
(Let us not discuss whether SQL is or is not a language in the usual sense; just consider that any process can be reduced to a finite concatenation of SQL statements.)
Let’s see an example with SQL and the implications regarding the subject at hand:
On Solveet they proposed a problem that I will simplify for several reasons: "We are given a list of songs; we know each one can be from the 70s, 80s, or 90s, and we know its duration. We are given a CD holder, and each sleeve has room for three CDs. We are asked to fill the CD holder in such a way that each page holds a CD from the 70s, one from the 80s, and …