There's a long history of passionate flame wars in Software Engineering (Vim vs Emacs, [Tanenbaum-Torvalds](https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_debate)), but programming languages are the one topic that caught everyone's attention. Even until a few years ago, programming languages were *hot*. New languages were coming out every day, and startups touted the vogue languages they were using (Erlang, Haskell, Scala, OCaml)[^language-companies] as either their secret sauce or a recruiting pitch. Most of us who were stuck with C++ or Java looked on with envy. Ah, how self-actualized these developers must be! If only my team allowed this!

[^language-companies]: Couchbase and Riak are written in Erlang; Facebook used Haskell in its [anti-spam system](https://engineering.fb.com/2015/06/26/security/fighting-spam-with-haskell/); Twitter and LinkedIn used Scala; and of course, there's [Jane Street](https://www.janestreet.com/technology/)'s use of OCaml.

Less so now in the 2020s. Programming language development is still thriving, but we average programmers feel less strongly about it. As if picking a box of cereal in a grocery aisle, we consider languages a simple tool of the trade. They're not perfect, but we're mostly satisfied with the options we have, and we move on. This post looks back on the history of this obsession and why it has faded.

# Summary

- Many were passionate about languages for three main reasons: to gain a [[#To gain a competitive advantage|competitive advantage]], as a [[#Means of self-expression|means of self-expression]], and to [[#To shape how you think|shape how you think]]. Passion for languages (whether love or hatred) has a long tradition - it goes back to the MIT hacker culture of the 1960s and the Homebrew Computer Club era that followed.
- **2000s** - Thanks to the Internet, the industry saw an explosion of programming languages. Server-side programming allowed software teams to use any language, as long as it could produce HTML. Java, PHP, Python, and Ruby saw a meteoric rise during this time.
- **2010s** - The industry matured. Our expectations of a language increased drastically.
- **2020s** - We are now less obsessed with self-expression via programming languages. The competitive advantage of using a niche language has diminished. We now seek competitive advantages via frameworks and methodologies.

# Reasons for caring about a programming language

## To gain a competitive advantage

> [!quote] [Beating the Averages](http://www.paulgraham.com/avg.html), Paul Graham
> Lisp is simply the most powerful language available \[...\] For one thing, it was obvious that rapid development would be important in this market. We were all starting from scratch, so a company that could get new features done before its competitors would have a big advantage.

This is a utilitarian argument for choosing the _best_ programming language. If we agree that not all languages are the same, then some languages are better than others. If your goal is to maximize success, why not choose the best language? Paul Graham's [Beating the Averages](http://www.paulgraham.com/avg.html) is the epitome of this argument. In it, Paul Graham reveals that Lisp was the secret weapon of his startup Viaweb. The influence of his argument - that startups should use the most powerful language - is hard to overstate.
Companies of this era used the most productive languages available to them - PHP (Facebook, Zynga), Ruby on Rails (GitHub, Airbnb, Twitter), and Python (Instagram, Pinterest, Yelp) - and built unicorns in no time.

From this perspective, what determines the best (or a better) programming language? Speed, iteration, and expressiveness.

It was more than languages that were changing at this time. The post-dot-com startup wave of the 2000s wasn't just about the companies but also the new methodologies around them. Terms such as *Minimum Viable Product* (MVP) and *Product-Market Fit* (PMF) were coined in this era.

* **Ability to rapidly write software** - For a fledgling startup, this was a matter of life and death.
* **Ability to change software** - The methodologies of this era - PMF, MVP, metric-driven development, A/B testing, and "pivots" - highly revered the ability to iterate quickly.
* **The language's raw expressiveness, in a vacuum** - In an era when a large amount of code needed to be written (see [[#On Package Managers]]), we were deeply concerned with a language's raw expressiveness. One notable example is the obsession with source lines of code (SLoC) equivalences, frequently cited without concrete evidence (*e.g., 1 line of Python is equivalent to 10 lines of C*).

For these reasons, **dynamic languages were preferred**. Starting from a blank slate, they offered the highest velocity. As computers were weak and tools were primitive, static/compiled languages offered little in return (long compile times, no advanced autocomplete or refactoring). Macros and metaprogramming, which dynamic languages excelled at, were also highly sought after.

Velocity was the core thesis, and from it many sub-arguments emerged. Languages identified as competitively advantageous were new or niche, and thus faced the usual criticisms - they are unproven, hiring will be difficult, and so on. The common counterargument was a judo move, turning these criticisms into strengths: by adopting such languages, you'd have a higher hiring bar and less need for engineers:

> [!quote] Paul Graham, [Revenge of the Nerds](http://www.paulgraham.com/icad.html)
> The third worry of the pointy-haired boss, the difficulty of hiring programmers, I think is a red herring \[...\] In fact, choosing a more powerful language probably decreases the size of the team you need, because (a) if you use a more powerful language you probably won't need as many hackers, and (b) hackers who work in more advanced languages are likely to be smarter.

## Means of self-expression

This is an *aesthetic* argument rather than a utilitarian one. Programming is a discipline where the identity and craftsmanship of the individual are emphasized (Vim vs Emacs, Windows vs Unix, ...). Joel Spolsky and Paul Graham, the 2000s' most prolific software essayists, wrote many articles about this craftsmanship, and both treated the programming language as a means of self-expression. In [Hackers and Painters](http://www.paulgraham.com/hp.html), Paul Graham compares programmers to artists, praising action, iteration, craftsmanship, and the importance of using the right tool.

> [!quote] [Hackers and Painters](http://www.paulgraham.com/hp.html), Paul Graham
> The right tools can help us avoid this danger \[of premature optimization\]. A good programming language should, like oil paint, make it easy to change your mind. Dynamic typing is a win here because you don't have to commit to specific data representations up front.
> But the key to flexibility, I think, is to make the language very [abstract](http://www.paulgraham.com/power.html). The easiest program to change is one that's very short.

Joel's _[[More Joel on Software#3. A Field Guide to Developers|A Field Guide to Developers]]_, an essay on hiring and retaining good programmers, is another good example. The essay has many ideas (private offices, good hardware, the programmer's social position in the corporate hierarchy), but what Joel emphasizes most is autonomy over technical decisions: top recruits pick their own projects, and arguments should be won on technical merit, not political merit. In this model, choosing a programming language on technical merit alone is the epitome of programmer autonomy.

> [!quote] [A Field Guide to Developers](https://www.joelonsoftware.com/2006/09/07/a-field-guide-to-developers-2/), Joel Spolsky
>
> Nothing is more infuriating than when a developer is told to use a certain programming language, not the best one for the task at hand, because the boss likes it.
> \[...\]
> to keep the best developers, investment banks have two strategies: paying a ton of money, and allowing programmers basically free reign to keep rewriting everything over and over again in whatever hot new programming language they feel like learning. Wanna rewrite that whole trading app in Lisp? Whatever.
> \[...\]
> Some programmers couldn’t care less about what programming language they’re using, but most would just love to have the opportunity to work with exciting new technologies.

Joel and Paul Graham were heavily influential, and they shaped the 2000s tech culture. However, the tradition of loving and hating one's tools of the craft goes way back. John McCarthy's MIT AI Lab didn't just invent Lisp; it invented the enthusiasm surrounding it. In every subsequent era, there's a new language that is finally capable of democratizing programming for the masses - BASIC, Pascal, Python... The following is from the hobbyist culture that produced the Homebrew Computer Club:

> [!quote] From [[Hackers; Heroes of the Computer Revolution]]
> Albrecht immediately decided that BASIC was _it_, and FORTRAN was dead. ... Albrecht became a prophet of BASIC and eventually cofounded a group called SHAFT - Society to Help Abolish FORTRAN Teaching.

Only the nouns are different - replace FORTRAN with Java and BASIC with Python, and this sentiment would find a home in the 2010s.

## To shape how you think

> [!quote] [How to Become a Hacker](http://www.catb.org/~esr/faqs/hacker-howto.html), Eric S. Raymond
> Lisp is worth learning for a different reason — the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot.

> [!quote] [[Coders at Work]], Douglas Crockford
> ... but I never got a chance to actually _think_ in APL.

In the early days of software engineering and computer science, individual languages occupied disjoint spaces. Languages were more academic, each created around a single paradigm (imperative, procedural, object-oriented, functional, logic) with little overlap. For example, OOP languages didn't have functional primitives, and Eiffel's *design by contract* was seen as its special sauce. In fact, there was a tendency toward **paradigm purism**. Java was the language of "everything is an object" and refused to introduce lambdas[^java-delegates].
[^java-delegates]: The [linked article](https://web.archive.org/web/20060426233825/http://java.sun.com/docs/white/delegates.html) discusses C#'s _delegates_, which are akin to method pointers and, later, lambdas. The summary: delegate-like behavior can be achieved in Java with an anonymous inner class, so why add a new language construct that undermines Java's object purity?

In this disjoint-paradigm world, a new, unfamiliar language was truly different. Even if you weren't going to use it day to day, learning a new language was considered a worthwhile endeavor. Many articles promoted this idea:

- Peter Norvig's [Teach Yourself Programming in Ten Years](https://norvig.com/21-days.html) is the most influential article from this era, preaching how important it is to know many different categories of languages.
- **Lisp / Scheme** - Paul Graham was obsessed with it, and SICP and HtDP (_Structure and Interpretation of Computer Programs_ and _How to Design Programs_, two introductory programming textbooks based on Scheme) are still regularly recommended (_[Is SICP/HtDP still worth reading in 2023](https://news.ycombinator.com/item?id=36802579)_ was on the Hacker News front page).
- [Learn You a Haskell for Great Good](http://learnyouahaskell.com/), a hugely influential online tutorial, pushes the same narrative for Haskell.
- **Smalltalk** - There's a bit of _originalism_ around [[OOP]]. Namely, Alan Kay argues that the original Smalltalk was about _message passing_. I personally believe there's more linguistic confusion than substantial merit in this, but there's a common belief that we've strayed from paradise and that the OOP practiced in Java and C++ is an unintended abomination.
- **BASIC** - Hard to imagine now, but it played a role similar to Python's as a means of democratizing programming. Interestingly, pushback against these too-easy "toy" languages is ever-present, with Dijkstra's "[BASIC causes irreversible brain damage](https://wiki.c2.com/?BrainDamage)" being the most famous.

## 2000s, the golden era

The 2000s were the golden era for new programming languages. The rise of the internet leveled the playing field: as long as a language could generate HTML behind a server, it could be used. This simple world, before JavaScript-powered "AJAX" applications, played no favorites. Whichever language could iterate fastest won the market.

Notable languages from this era are **PHP**, **Ruby**, and **Python**. While enterprise web applications were being written in Java (remember Java EE / JBoss?), hobbyists and startups chose dynamic languages. PHP's model of development - rename `index.html` to `index.php` and template the sections you wish to make dynamic - was rudimentary, yet it bootstrapped defining projects of the era (WordPress, Wikipedia). Ruby on Rails took this further, integrating other aspects of application development (the database via ActiveRecord, an MVC framework), and spawned many unicorns, including GitHub, Twitter, and Airbnb. Python played the role of both a web language and the language of data processing, thanks to mature libraries like NumPy. Meanwhile, the functional programming paradigm (Haskell, the ML dialects) seemed perpetually "just around the corner", slowly increasing in awareness and market share.

C# and Java didn't fall behind either. C#, initially seen as "Microsoft's answer to Java", evolved rapidly, adding generics, lambda expressions, LINQ, and async/await. Java evolved more slowly, but generics and lambda expressions eventually arrived.
Scala seemed to unite the best of both worlds, merging programming language theory with the widely popular JVM.

Another elephant in the room was **the end of Moore's Law**[^moores-law] and the impending dawn of parallel computing. The industry consensus was that Moore's Law, at least the single-core version of it, would end soon and that we needed to embrace parallel computing. Many found the existing concurrency paradigm (shared memory, locks, and critical sections) hopelessly inadequate and believed a new programming model and language would be the solution. A good example of this belief is [Fortress](<https://en.wikipedia.org/wiki/Fortress_(programming_language)>), an experimental language where the `for` construct was actually a parallel operator.

[^moores-law]: _[[Coders at Work]] (2009)_ had many humorous takes on it: "*Peter Seibel:* Yet by 2019, or whatever, we’re supposed to have 1,000 cores in a notebook computer". It's hard to know whether people _actually_ believed this.

I've only listed the historically memorable developments. As someone who frequented Hacker News during this era, I witnessed countless languages pop up. The most interesting development was startups _creating their own languages_. Many of these languages weren't public, only discussed in blog posts, but since many of these companies and founders were well respected (Joel started Fog Creek, Dustin Moskovitz of Facebook started Asana), people took them seriously.

- [Fog Creek's Wasabi](https://jacob.jkrall.net/killing-off-wasabi-part-1)
- [Asana's Lunascript](https://archive.nytimes.com/www.nytimes.com/external/venturebeat/2010/02/03/03venturebeat-facebook-co-founders-asana-gives-a-peek-at-l-16105.html)
- [280 Slides' Objective-J / Cappuccino](https://www.cappuccino.dev/)

# What changed?

## Industry has matured

We now have much higher expectations of programming languages - of both the language itself and the ecosystem around it. Historically, a programming language referred to a very small surface area: the language's syntax, type system, and semantics. _Libraries_ were out of scope (as far as PL theory is concerned, they are trivial, so standard libraries were absent or bare-minimum), and so were _compilers_ (seen as mere implementations of the language's spec).

Languages nowadays ship thick standard libraries, covering everything from collections to JSON parsing to making network requests. Surrounding systems - the build system, IDE support (syntax highlighting, autocomplete), and package managers - are now seen as integral parts of a programming language.

Beyond that, we holistically consider the culture and ecosystem part of a language. It's hard to think of Ruby without Rails, Python without NumPy, or JavaScript without React; these aren't mere libraries but an essential part of the ecosystem and culture. Just as "Linux" technically refers to the kernel alone but colloquially refers to the entire OS ecosystem, these *thick* languages include components that are not part of a PL researcher's theoretical definition of a language.

## On Package Managers

It's hard to imagine how we used to program without package managers. [CPAN](https://en.wikipedia.org/wiki/CPAN) (1995) was the first mainstream package repository, and it has been described as Perl's "killer app". Many other languages followed - Maven for Java, PyPI for Python, and npm for Node - and package managers are now the norm, indispensable. This drastically changed how we consume open-source software - from "download the source tarball and configure the Makefile" or "download this jar file and add it to the `$CLASSPATH`" to "`npm install`" - greatly improving the experience.
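To make the contrast concrete, here's a minimal sketch (assuming a Node.js project and the real `lodash` package from npm; the choice of library is arbitrary):

```javascript
// One command fetches the library and its entire dependency tree:
//   $ npm install lodash
// No tarballs, no Makefiles, no $CLASSPATH - just require it and go.
const _ = require('lodash');

console.log(_.chunk(['a', 'b', 'c', 'd'], 2)); // [ [ 'a', 'b' ], [ 'c', 'd' ] ]
```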
Package managers reshaped the nature of software development and **largely eliminated the competitive advantage of using a better-in-a-vacuum language**. The supposed allure of these languages (Lisp, Haskell - you name it) was that they were so expressive and productive that one could ship faster with a leaner team. This may have been true in the 2000s. Back then, you had to roll your own building blocks, ranging from simple data structures, to serialization and parsing libraries for data formats (JSON, HTML, PDF), to web frameworks. In that world, using the most powerful language offered a legitimate advantage, as you had to conceive and write the foundation fast.

Package managers have their own issues (dependency conflicts, supply-chain vulnerabilities, software bloat[^node-modules]), but we cannot live without them. When I pick up a new language, installing a package is one of the first things I learn. In this world, it's more important to pick a language with a mature ecosystem than the one that can express the most concise quicksort.

[^node-modules]: Consider the memes about the size of the `node_modules` directory, like [this one](https://twitter.com/diessicode/status/1000126094870278150).

## Cost of creating and maintaining a language

Languages used to be created by a single person. We associate Guido van Rossum with Python, James Gosling with Java, and Larry Wall with Perl. Some of this is great-man fallacy, given that it takes a team to create and maintain a language; still, it's true that in the early years these creators personally designed and wrote the majority of their languages.

With today's expectations, it takes a team to write and maintain a language. **Recent mainstream languages are all backed by large companies** - Go (Google), Kotlin (JetBrains / Google), Swift (Apple), and TypeScript (Microsoft). This is especially true as the business models around software tools have changed drastically. Other than JetBrains, there are no longer independent software shops selling languages and IDEs (think Borland's Turbo Pascal). Moreover, these language projects contribute nothing directly to the company's bottom line. Kotlin and Swift are at least platform investments in Android and iOS, but the relationship is less direct for Go and TypeScript. Thus new languages are reserved for companies with deep pockets. One notable exception is Rust, with its meteoric rise on popularity alone.

One can witness the "language as a single-person project" mindset in the paper _The Early History of F#_ by Don Syme, the creator of F#. The paper is full of interesting tidbits (Microsoft planned to launch 7 commercial and 7 academic languages with the .NET Framework, dubbed _Project 7_), but the most telling is the author's enthusiasm for writing a new language:

> [!quote] [The Early History of F#](https://www.microsoft.com/en-us/research/publication/the-early-history-of-f/), Don Syme
> When time permits I plan to implement a .NET CLR compiler for Caml [...] My first reason for doing this is because ==I have an existing OCaml code base that I would like to make available as a .NET library==.
> *- October, 2001*

From today's perspective, implementing a whole language in order to port a mere library seems crazy.
However, this was still an era when a language could be implemented by a single person, and people were eager to do so. You may have noticed that the startup-created languages - Wasabi, Lunascript, Cappuccino - are all dead. Creating a language is fun and games, but maintaining one is a gargantuan task, especially given the high standards we now hold languages to.

## Software Engineering vs. Programming

> [!quote] Software Engineering at Google
> Software engineering is programming integrated over time.

The previous section, [[#Cost of creating and maintaining a language]], explored this from the language creator's side. Individual software teams using these languages faced a similar challenge. Ironically, the rise of server-side web programming, which unleashed the golden era of languages, changed the nature of software engineering. Compared to applications (the *shrinkwrap* that got shipped, not the *apps* that get downloaded), server code has a longer life span - it exists for as long as it's relevant to the business. It's also more complex, as it accumulates scaling challenges and shifting business needs over time. Thus the software industry shifted its attention toward long-term sustainability - architecture, code quality, maintainability, and refactoring - over short-term velocity.

As previously noted, writers of this era dismissed the problems associated with niche languages (Paul Graham's writing on hiring; Joel's nonchalant comment on Lisp rewrites). However, the problems were legitimate. The [[#On Package Managers|ecosystem]] deficiency of having to write all your own dependencies wasn't just about the immediate work: **it increased the software's surface area and its total cost of ownership**. Hiring, sometimes spun as a benefit ("a higher bar"), was a real concern too - during the startup boom, companies' demand for software engineers went through the roof. Finally, the code's longer expected life meant it was now more likely to be maintained by unfamiliar eyes, and an unfamiliar language adds to that friction.

One of the best examples of this comes from Coda Hale of Yammer. Yammer started off as a Scala shop but transitioned back to Java due to these concerns:

> [!quote] [The Rest of the Story](https://codahale.com/the-rest-of-the-story/), Coda Hale
> In hindsight, I definitely underestimated both the difficulty and importance of learning (and teaching) Scala. Because it’s effectively impossible to hire people with prior Scala experience ... If we take even the strongest of JVM engineers and rush them into writing Scala, we increase our maintenance burden with their funky code; if we invest heavily in teaching new hires Scala they won’t be writing production code for a while, increasing our time-to-market. ... ==if new hires write Java, they’re productive as soon as we can get them a keyboard==.

## Languages are collectively better (and similar)

Most mainstream languages are now multi-paradigm and more alike than ever. It turns out most aspects of a language aren't fundamental, and existing languages can evolve to adopt them. We're no longer zealous about paradigm purity - if another successful language has a well-loved feature, copying it is a low-risk, high-return move.

- **Functional programming**, once confined to a separate category of languages, is now embraced by most programming languages (see the sketch after this list). Immutability and side-effect-free code are universal best practices even when they aren't enforced by the language.
- **Types and generics** have been retrofitted onto languages (generics in C# 2.0, Java 5, and Go 1.18; gradual static types in TypeScript).
- **Object-oriented programming** (OOP) was once the hottest buzzword, and most languages adopted some notion of it. Nowadays the buzz has waned (at least for the classical, implementation-inheritance-based polymorphic OOP). Languages either don't accentuate OOP or actively market against it (Go). The same goes for frameworks (React's migration from class components to function components).
- **Dead evolutionary branches** - Purely functional and logic programming languages, exemplified by Haskell and Prolog, failed to achieve mainstream mindshare and didn't live up to their dreams. They were supposed to be the next big thing 10 years ago, but one cannot stay a yet-to-bloom underdog forever.
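As a small illustration of this convergence, here's a sketch in plain JavaScript (the data is made up): the functional style that once required an ML-family language is now idiomatic in a thoroughly mainstream one.

```javascript
// First-class functions, map/filter/reduce, and immutable updates -
// functional staples, written in an ordinary mainstream language.
const orders = [
  { id: 1, total: 20, status: 'shipped' },
  { id: 2, total: 35, status: 'pending' },
];

// A side-effect-free computation: `orders` is never mutated.
const shippedRevenue = orders
  .filter((o) => o.status === 'shipped')
  .reduce((sum, o) => sum + o.total, 0);

// An immutable "update": build new objects instead of mutating in place.
const approved = orders.map((o) =>
  o.status === 'pending' ? { ...o, status: 'approved' } : o
);

console.log(shippedRevenue);     // 20
console.log(approved[1].status); // 'approved'
console.log(orders[1].status);   // 'pending' - the original is untouched
```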
Still, even among the mainstream languages, not all languages are the same. Fundamental constructs such as Rust's ownership, zero-cost abstraction via reified generics (Rust, C++), and threading models (JavaScript, Go) are not easy to add to an existing language without an explosion of complexity or inconsistency. However, the common denominator between languages is larger than ever before. This greatly diminishes the [[#To shape how you think|shape how you think]] argument for learning different languages. Sure, there are still other languages (both mainstream and academic) one can learn from, but the mainstream languages are more similar than different, and the academic languages are even less practical now that the mainstream languages are so much *thicker* than before.

## The death of [[Moore's Law]] has been greatly exaggerated

It's hard to convey the 2000s' industry-wide fear around the end of Moore's Law. Psychologically, it's comparable to the 20th-century fear of oil depletion. The software industry had benefited from the ever-increasing performance of computers. What would we do if the ever-giving stream dried up? It was a neo-Malthusian fear (computing growth stagnating while computing demand grows forever) that captivated many of us. Moreover, parallel, concurrent, and distributed computing was still new (more theoretical than real for most practitioners), and people found the existing concurrency primitives terribly inadequate. The widespread belief was that a **new programming model and language were necessary for the new parallel paradigm**. The [C10K problem](https://en.wikipedia.org/wiki/C10k_problem) was the major problem statement that popularized asynchronous IO, including Node.js. New programming models such as the [Actor model](https://en.wikipedia.org/wiki/Actor_model) and [Software Transactional Memory](https://en.wikipedia.org/wiki/Software_transactional_memory) (STM) were seen as the future.

However, the future turned out to be more mundane. Despite some interesting developments (Erlang, Node.js, Go), we are still using the programming languages and models of the past. Our code is mostly imperative functions; we didn't abandon them for actors or STM. What happened?

- **We're living in a world of a _revised_ Moore's Law.** The doubling period has slowed, but progress didn't grind to a halt. Semiconductor density is still increasing; the increase now translates into more cores rather than single-core performance. Even so, single-core performance is still growing, and applications benefit from it.
- **More workloads are [embarrassingly parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel)** than we anticipated. For example, most server requests are independent of each other, so they scale easily across multiple servers behind a load balancer. In fact, best practices evolved to externalize concurrency (to a load balancer, to MapReduce, or to a distributed database) so that the application stays as single-threaded as possible.
- **The rise of specialized hardware.** In contrast to general-purpose computing, which started to plateau, most performance-intensive operations such as graphics and video decoding are hardware-accelerated and invoked via libraries. Modern GPGPUs do resemble the 1,000-core machines envisioned two decades ago, but the workloads are so well defined (matrix operations, video decoding, graphics, AI, ...) that application developers are abstracted away from them.

For these reasons, there was no upheaval in the mainstream programming languages[^julia].

[^julia]: There's still appeal in an efficient language like [Julia](http://julialang.org/), which aims to be fast enough to eliminate the need for C modules. As long as libraries like NumPy and PyTorch do the heavy lifting, Python isn't hindered by its inherent inefficiency. However, the chasm between *plain Python* and *C module* is wide - a source of many frustrations and a knowledge gap.

A notable exception is the mainstream adoption of promises and _[async/await](https://en.wikipedia.org/wiki/Async/await)_. Originally introduced in F# in 2007 and adopted by C# in 2012, async/await is now part of JavaScript and Python. **Async/await is perhaps the greatest improvement to JavaScript.** For JavaScript, neither multi-core performance nor parallelism mattered much (single-threaded, running in a browser, mostly for the UI). However, JavaScript is highly asynchronous (event handlers, XHR requests to servers), which strained the old callback-based programming model. Even with libraries such as [async.waterfall](https://caolan.github.io/async/v3/docs.html#waterfall), callbacks were too confusing, leading to fragile, difficult-to-maintain spaghetti code.

```javascript
const async = require('async'); // the `async` utility library from npm

async.waterfall([
    function(callback) {
        callback(null, 'one', 'two');
    },
    function(arg1, arg2, callback) {
        // arg1 now equals 'one' and arg2 now equals 'two'
        callback(null, 'three');
    },
    function(arg1, callback) {
        // arg1 now equals 'three'
        callback(null, 'done');
    }
], function (err, result) {
    // result now equals 'done'
});
```

*Example code from async.waterfall's documentation - do you follow it?*

[Promises](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) brought structure to asynchronous programming, but code still had to be arranged as promise chains. Finally, async/await allowed asynchronous code to keep its familiar, synchronous-looking structure. There are still a few sharp edges - the pre-generator async polyfill is quite complex, and calling async code from sync code has many caveats - but async/await's simplicity is unparalleled.
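For contrast, here's the same waterfall-shaped flow rewritten with async/await - a minimal sketch, where the hypothetical `stepOne`/`stepTwo`/`stepThree` functions stand in for the anonymous stages above:

```javascript
// Hypothetical async steps standing in for the waterfall stages above.
async function stepOne() { return ['one', 'two']; }
async function stepTwo(arg1, arg2) { return 'three'; }
async function stepThree(arg1) { return 'done'; }

async function run() {
  const [arg1, arg2] = await stepOne();   // 'one', 'two'
  const arg3 = await stepTwo(arg1, arg2); // 'three'
  return stepThree(arg3);                 // 'done'
}

// Errors propagate like exceptions instead of error-first callbacks.
run()
  .then((result) => console.log(result)) // result now equals 'done'
  .catch((err) => console.error(err));
```

The control flow reads top to bottom, and error handling collapses into ordinary `try`/`catch` - the "familiar structure" in question.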
*Async/await* is deceptively simple, and almost any language can evolve to accommodate it. Compared to the alternative futures (parallel `for` loops, actors, or STM), today's languages are a minor modification of the past. And yet the industry didn't implode from the end of Moore's Law.

# What now?

## Evaluating failed and successful languages

> [!quote] [Joel on Software](https://www.joelonsoftware.com/2002/05/05/20020505/), Joel Spolsky
> It seems like .NET gives us a "choice" of languages precisely where we couldn't care less about it—in the syntax.

Looking back, many languages of the 2000s were doomed from the beginning. They either emphasized syntax[^better-javascript] or attempted to bring a dynamic language to an existing runtime (Jython, JRuby, Groovy, Boo, IronPython, IronRuby).

[^better-javascript]: There were many variants of so-called "better JavaScript". The popular consensus was that JavaScript is the [assembly language of the web](https://www.hanselman.com/blog/javascript-is-assembly-language-for-the-web-sematic-markup-is-dead-clean-vs-machinecoded-html) - so low-level and unpleasant to use that it would eventually serve only as a compiler target.

A difference in syntax alone is not interesting enough to sustain a language. Microsoft's CLR (Common Language Runtime) initially promoted multiple languages (VB.NET among them), but it's practically a C# monoculture now. We used to hold stronger preferences and disdains about syntax - the C-style curly-bracket languages versus the rest - but the industry has mostly converged. If people are accustomed to curly brackets, why diverge and introduce a cognitive speed bump?

**CoffeeScript** is the latest example of mainstream excitement over syntax alone. Many of its quality-of-life features (arrow functions, object destructuring) were a boon. However, once those features landed in ECMAScript itself (with transpilers and polyfills bridging older browsers), CoffeeScript's popularity waned - the standard route was less drastic, less proprietary, and could be adopted incrementally.
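A rough sketch of those conveniences, which are now plain ECMAScript (the example itself is contrived):

```javascript
// Arrow functions and destructuring - conveniences CoffeeScript
// popularized that are now native ECMAScript.
const user = { name: 'Ada', languages: ['CoffeeScript', 'JavaScript'] };

// Destructuring pulls fields out of objects and arrays directly.
const { name, languages: [firstLanguage] } = user;

// Arrow functions: terse syntax with lexical `this`.
const shout = (s) => s.toUpperCase();

console.log(`${name} started with ${shout(firstLanguage)}`);
// => 'Ada started with COFFEESCRIPT'
```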
Notably, the only winners among the compile-to-JavaScript languages are TypeScript and JSX. TypeScript wasn't just a syntactic shim over JavaScript; it offered a powerful yet flexible type system on top of JavaScript to support much larger teams and codebases. JSX *is* just a syntactic shim, but it's such a lightweight, unintrusive shim that it contributed to React's popularity.

**Kotlin** and **Swift** were designed to modernize already-successful platforms (Android and iOS). The market appetite was already there. Moreover, both languages were carefully designed for interoperability with Java and Objective-C, enabling gradual adoption.

**Rust** is a fascinating example, positioning itself as a modern competitor to C++. C++ was once a universal programming language, but its market share shrank over time as new players carved off its territory. Still, there remained areas where C++ was the only tool for the job (where low-level memory management is needed alongside access to higher-level constructs). Rust expanded on C++'s design patterns (RAII), introduced modern semantics (borrow checking), and became the first language that can compete with C++ in the areas that had no alternative.

There are still new languages captivating people's minds. Go and Rust revealed that systems programming is an underserved market for new programming languages, and younger players like Zig are attempting to break into it.

## Competitive advantage, again

Maximizing competitive advantage is still important, but we've moved on from programming languages. The right language choice still matters and can create or destroy a competitive advantage, but the argument is now more empirical. The ecosystem plays a big role, and **a language's expressiveness in a vacuum is less of a concern**. Moreover, most modern mainstream languages are so capable that one can't go too wrong choosing among them, while choosing _outside_ them is a prohibitively expensive, untraversed path.

Instead, we turn our attention to frameworks, databases, cloud computing, container orchestration, and other methodologies as sources of competitive advantage (LLMs such as GitHub Copilot are the latest entrant in this arena). Languages are still evolving, but we no longer expect them to be the sole means of elevating the programming experience. We treat languages as a dependable foundation and look toward these newer, seemingly limitless (or at least without a _known_ limit) developments.