Picking a frontend framework isn’t easy, but some clarity around the criteria can help everyone understand the final decision.
The problem of picking a frontend framework is so large and amorphous that architects find it worthwhile to break the problem down into criteria that can be assessed individually. That reduces but doesn’t eliminate the complexity, because there is no one frontend that is superior across all of these criteria. Still, by handling each of the five essential criteria individually, architects get some clarity around what matters.
There’s also a sixth criterion that gets more attention than it should. But, from an architectural point of view, it’s not anywhere near as important as the first five.
The first three criteria are relentlessly practical and are considered first, primarily because they allow architects to take some contenders off the table, reducing the size of the decision space. These criteria are considered so “obvious” that they are often applied without explicit acknowledgement. First among them is compatibility: whether the framework works with the systems and services the organization already has.
Second, and closely related to compatibility, are any issues related to the organization’s “areas of concern.” For example, organizations involved in cartography are driven by their Geographical Information Systems tools and will have committed to some specific toolset; a financial services company will depend upon a toolset that generates volume trading charts based on streaming data; hospital toolsets will be compliant with regulations concerning who can see what information and under what circumstances. Organizations with specialized backends like these will sacrifice any number of criteria for a framework with components that support that functionality rather than give up the toolset they depend on.
Closely related to this is what development tools the organization is using: the dev shop’s “areas of concern.” Leaping to a new framework that requires completely different tools/components doesn’t mean the organization gets to abandon its old toolset—the shop still has to maintain all of its existing applications. Having two disjoint toolsets isn’t a good thing (there’s a reason that tools, like Telerik, that support multiple frameworks strive to make components in different environments work in similar ways).
Third: Performance. Does the frontend run “fast enough” for the kind of applications that the organization needs? I’m not suggesting that architects pick the fastest framework: “fast enough is good enough.” But applications that can’t be easily built with “fast enough” performance will force developers to violate best practices to achieve “good enough” performance. With a framework that isn’t “fast enough,” design will be sacrificed to expediency. Architects don’t like that.
The next two criteria are more philosophical, though, and less prone to any kind of measurement.
The fourth issue is how opinionated a framework is about the way applications should be built: The paradigm that describes what a well-architected application looks like. Some frameworks are more “opinionated” than others when it comes to implementing the enterprise design patterns that architects value (and there are no frontends with “no opinions”).
Angular, for example, is relatively opinionated in how it assumes that applications will be built and, as a result, provides all the tools (state management, routing, dependency management, and so on) needed to make it easy to build applications that way. React, on the other hand, is less opinionated and assumes that you’ll add in the tools you want to build your application the way you want… as long as you commit to one-way databinding for moving data around.
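To make that one-way databinding commitment concrete, here’s a minimal, framework-agnostic sketch of the idea: actions produce a new state, and the view is recomputed from that state rather than writing back into it. The names here (AppState, Action, update, render) are illustrative, not any framework’s actual API.

```typescript
// A minimal sketch of one-way data flow: data moves in one direction,
// from state, through pure functions, to the rendered view.

interface AppState {
  count: number;
}

type Action = { kind: "increment" } | { kind: "reset" };

// Actions never mutate state in place; they produce a new state...
function update(state: AppState, action: Action): AppState {
  switch (action.kind) {
    case "increment":
      return { count: state.count + 1 };
    case "reset":
      return { count: 0 };
  }
}

// ...and the view is a pure function of that state.
function render(state: AppState): string {
  return `Count: ${state.count}`;
}

let state: AppState = { count: 0 };
state = update(state, { kind: "increment" });
state = update(state, { kind: "increment" });
console.log(render(state)); // prints "Count: 2"
```

The point isn’t the specific shape of the code—it’s that a framework which insists on this flow is expressing an opinion about how data should move through your application.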
This is an issue that reasonable people will disagree on. If an architect likes a framework’s paradigm, opinionated software that prevents developers from doing things that are stupid/wrong while encouraging them to do the right thing is a good thing. Furthermore, by providing a fixed toolkit, opinionated software fosters the growth of expertise (knowing what the error messages really mean, for example). There’s the obvious downside, though: If there’s some special case that doesn’t fit the paradigm, the framework may force an awkward design or even prevent handling the case at all.
Frameworks with fewer opinions give shops more flexibility, which other architects prefer. But it’s easy to exaggerate that benefit. Architects really only get to use that flexibility once, as individual tools are added to the framework. Effectively, every shop becomes opinionated even if the framework the shop uses is not. While the shop gets the possibility of bringing in some new tool to handle a special situation, architects generally feel that increasing the size of the toolkit isn’t a smart move. So, what non-opinionated software actually lets architects do is defer making decisions in some areas until necessary. That’s obviously a good thing and leads to the fifth criterion: future-proofing designs.
No matter what anyone says, in enterprise architecture, truth is not immutable: The way that applications are designed today is not the way they will be designed tomorrow. The fifth criterion assesses frameworks both on their ability to evolve and on how actively the framework’s ecosystem generates new tools and components.
For example, going forward, architects using event-driven designs will value components that integrate well with gRPC services, especially for organizations where performance is key. Frameworks with components that will integrate as well with gRPC services as they do with the current crop of RESTful services are more attractive to architects.
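One way architects hedge against that kind of transport shift is to keep components from depending on the transport at all. Here’s a hedged sketch of that idea (the names—PriceSource, RestPriceSource, renderTicker—are hypothetical, and no real gRPC client is used): the component depends on an interface, so swapping a RESTful implementation for a gRPC-backed one later doesn’t touch the component.

```typescript
// Hypothetical example: isolating a component from the transport it uses.

interface PricePoint {
  symbol: string;
  price: number;
}

// The component depends only on this interface, not on REST or gRPC.
interface PriceSource {
  latest(symbol: string): Promise<PricePoint>;
}

// Today's implementation might call a RESTful endpoint (the network call
// is injected here so the sketch stays self-contained)...
class RestPriceSource implements PriceSource {
  constructor(private fetchJson: (url: string) => Promise<PricePoint>) {}
  latest(symbol: string): Promise<PricePoint> {
    return this.fetchJson(`/api/prices/${symbol}`);
  }
}

// ...and a future gRPC-backed class could implement PriceSource
// without changing this component at all.
async function renderTicker(source: PriceSource, symbol: string): Promise<string> {
  const p = await source.latest(symbol);
  return `${p.symbol}: ${p.price.toFixed(2)}`;
}

// Demo with an in-memory stand-in for the network call.
const demo = new RestPriceSource(async () => ({ symbol: "ACME", price: 101.5 }));
renderTicker(demo, "ACME").then(console.log); // prints "ACME: 101.50"
```

A framework whose components accept this kind of abstraction, rather than baking REST assumptions into every widget, is the kind of framework the fifth criterion favors.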
You may feel that I’ve omitted a key criterion: Leveraging programmer knowledge. Smart architects should value that as a potential sixth criterion… just not very much.
Architects should certainly prefer a framework that leverages existing developers’ knowledge over a framework that doesn’t because retraining is expensive. But, unlike the previous criteria, which involve ongoing costs, an organization only pays for retraining once. And even when moving to a new framework, much of the conceptual knowledge that developers possess is transferable (especially if the framework lets them use similar tools and components). Retraining your staff every decade (or so) is better for the organization than hanging on to a frontend that doesn’t support the other five criteria.
In this area, the real concern isn’t the expertise inside the organization, it’s how available (and expensive) outside expertise is. If outside resources are very expensive, it’s a sign of one of two problems: Either you’re hanging onto a framework that’s becoming increasingly obsolete, making the remaining developers ever more expensive (see: COBOL), or you’re adopting a framework that no one has much experience in and you’re going to be on your own when you hit a problem (see: obscure tool of your choice).
Even with all of this, smart architects also recognize that, whatever framework is picked, three months later there will be a problem that would have been more easily solved with a different framework. Life’s like that. However, by explicitly using these criteria, the process acknowledges both the trade-offs that were made and the reasons that drove those trade-offs. The decision may not be “right” in some absolute sense, but it will be understood. That’s about all you can hope for.
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.