How technologies progress

2021-01-10, by Marc-Antoine Beaudoin

Thankfully, the first planes ever to take off looked nothing like those we use today. Such progress certainly requires scientific discoveries and engineering development. But less obvious, I believe, is how research and engineering interact to make a technology progress. That’s what I want to discuss in this article.

Ideas, knowledge, and products

First, we need a map to navigate the concepts. We need three ingredients to make a technology:

  • knowledge, to describe the physical phenomena we observe,

  • ideas about how to make things work, and

  • products—things that work the way we intend them to.

The process of turning ideas into an actual product is called engineering product development, and the process of discovering new knowledge is called research.

A map of technology progress

With this article I wish to correct the misconception that technological progress happens in a linear fashion, where scientists make new fundamental discoveries, which engineers then turn into useful products. The reality is a little more complex. If I challenged anyone to make a lightbulb, I doubt they would benefit from knowing Planck’s law for black-body radiation. Edison—and the many people who made light bulbs before him—didn’t use it. In fact, they didn’t have it. Edison made his first successful attempt around 1879, and Max Planck published the law in 1900. The same goes for boundary layer theory. Very useful in aerodynamics, the theory helps predict important phenomena such as boundary layer separation. It was published by Prandtl in 1904; the Wright brothers had already flown at Kitty Hawk in December 1903. It seems as if inventors don’t really need to fully understand their prototypes to make them work. Understanding helps a great deal—and we’ll come back to this later—but historically, doing seems to come before understanding. Now, before making the connection between research and engineering, let me discuss them separately.

What is research?

The goal of research is to expand the boundary of human knowledge—typically captured in scientific publications. A scientific paper is essentially an argument, backed up by evidence and vetted by independent experts. The method to create new knowledge goes as follows: one starts from prior knowledge, asks a research question, conducts an experiment to gather evidence, and argues for meaningful conclusions in a publication. If the reviewers (mostly) agree with the researcher, bingo, the research gets published and becomes new knowledge. This process is not perfect, but it is the best we have.

To illustrate the process, let’s look at how the theory of special relativity came about. Before Einstein, it was already known that light travels at a finite speed. This was predicted by Maxwell’s equations of electromagnetism and verified experimentally. However, dissonance arose from certain thought experiments. Imagine that a train travels at half the speed of light, and that it carries a light source. What, then, would be the speed of a light beam emitted in the train’s direction of travel? According to the framework of Newtonian mechanics, velocities simply add, so the beam would travel at one and a half times the speed of light. But how could it, if the speed of light has a fixed maximum? Instead, one could argue that the beam travels at only half the speed of light with respect to the moving train, so its “total” speed would still be exactly the speed of light. But that is also not supposed to happen: why would the laws of physics—here, electromagnetism—change when travelling at high velocity?

Physicists of the time tried several ideas to resolve this conflict, and it was Einstein who finally made sense of it. In 1905 he published a paper proposing that the speed of light is indeed fixed, and that the laws of physics are the same in any inertial frame of reference. So light would travel at the same speed for an observer standing still and for an observer travelling very fast. But for that to be true, time needs to run slower for the observer travelling very fast. He proposed the concepts of time dilation and length contraction, and showed that they resolve many of the known dissonances with the theories of the time.
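
To make the resolution concrete, here is the relativistic velocity-composition formula from the 1905 theory, applied to the train thought experiment above, with the train moving at v = c/2 and the beam emitted at u′ = c in the train’s frame:

```latex
% Relativistic composition of velocities: a beam emitted at u' = c inside a
% train moving at v = c/2 is still observed at exactly c from the ground.
\[
u \;=\; \frac{u' + v}{1 + \dfrac{u'v}{c^{2}}}
\;=\; \frac{c + \tfrac{c}{2}}{1 + \dfrac{c \cdot \tfrac{c}{2}}{c^{2}}}
\;=\; \frac{\tfrac{3c}{2}}{\tfrac{3}{2}}
\;=\; c .
\]
```

Plugging in any train speed below c gives exactly c for the composed speed, so the paradox dissolves without touching Maxwell’s laws.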


To create new knowledge, Einstein started from well-established theories (those of Newton and Maxwell), formulated a research question from an intriguing dissonance, worked out an idea, and argued that it solves an important problem. Details vary, but the pattern remains the same for typical scientific discoveries.

The engineering design process

What is engineering product development?

The goal of engineering design is to create a product that satisfies customer needs. Thus, a design project typically begins by defining the product and the intended client. The project is given a GO if the estimated market size for this client-product combination justifies the estimated product-development cost. I will use the design of a new electric car as a working example, with parents of a young family as the intended customers. The next step in the design process is to transform client needs into product specifications. For instance, parents might want to bring kids to sports tournaments in neighbouring cities, so it would be best to reduce range anxiety. A driving range of 400 km per charge seems a good target to aim for.

Then, the conceptual design phase consists of choosing appropriate concepts and creating the product architecture. Engineers might judge that a Li-ion battery chemistry is the best option for the targeted product specifications. They might also choose to place the motor and transmission as close as possible to the driven front wheels to minimize drivetrain mass, and opt for an integrated motor-transmission powertrain to maximize front-trunk volume. Then, technical specifications are assigned to each concept such that the global specifications can be met. For instance, the powertrain efficiency must be such that the battery capacity is sufficient to reach the 400-km driving range. At this stage, concepts are changed if they are deemed inadequate for the product specifications.

Then, the detailed design begins. This is by far the most expensive and time-consuming part of the design process, as every detail of the vehicle has to be worked out. The validation process for a complex product is typically done in stages, validating individual parts first, up to the complete prototype. Depending on the industry, the prototype(s) will have to go through a series of testing and certification procedures before the design is finally approved. The picture above takes a V shape because the design part of the development process goes from general to specific, and the validation part goes from specific to general.
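
To illustrate how a global specification cascades into technical specifications, here is a minimal sketch of the battery-sizing arithmetic. All the numbers (energy consumption, powertrain efficiency, usable battery fraction) are hypothetical placeholders chosen for illustration, not figures from any real program:

```python
# Minimal sketch: cascade a 400-km range target into a battery-capacity requirement.
# All numbers below are hypothetical placeholders for illustration only.

RANGE_TARGET_KM = 400          # global product specification
CONSUMPTION_KWH_PER_KM = 0.16  # assumed energy drawn at the wheels per km
POWERTRAIN_EFFICIENCY = 0.90   # assumed battery-to-wheel efficiency
USABLE_BATTERY_FRACTION = 0.95 # assumed usable share of nominal capacity

def required_battery_capacity_kwh(range_km: float) -> float:
    """Nominal battery capacity needed to meet the driving-range target."""
    energy_at_wheels = range_km * CONSUMPTION_KWH_PER_KM
    energy_from_battery = energy_at_wheels / POWERTRAIN_EFFICIENCY
    return energy_from_battery / USABLE_BATTERY_FRACTION

if __name__ == "__main__":
    capacity = required_battery_capacity_kwh(RANGE_TARGET_KM)
    print(f"Required nominal capacity: {capacity:.1f} kWh")  # about 74.9 kWh here
```

If the resulting capacity turns out too heavy or too costly, the concept or the specification gets revisited, which is exactly the back-and-forth described above.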


Research and engineering design differ in an important way: researchers know where to start, but they don’t always know where their research will lead them; whereas engineers know what to aim for, but they need to find a way to get there.

Turning knowledge into products

Let’s now discuss why it is hard to turn new knowledge into new products. First, one should not start product development in the middle, with a solution in search of a need. If there is no need for a new equation, material, or concept in a design project, then it should not be used. The goal of a product is to satisfy a client, not the design engineers.


Moreover, new discoveries are typically very precise and narrow in scope. To find use in commercial applications, they often require complementary discoveries or complementary capabilities, such as manufacturing capabilities. Yet science reporters commonly overstate the immediate practical implications of a lab discovery. Perhaps certain scientists also oversell their research.


Artificial neural networks provide a great example. They were invented (or discovered…) way back in the 1950s. A famous one was Frank Rosenblatt’s perceptron of 1958. It consists of the simplest possible neural network architecture: one input layer, and one output layer with a nonlinear activation function. The perceptron was used to classify various inputs, but it was quite limited. It was later discovered that for neural networks to be truly powerful, they need to have at least one hidden layer between the inputs and the output, and the hidden layer must also contain nonlinear activation functions. However, these multi-layer perceptrons are harder to train. A good way to train them turns out to be using backpropagation—essentially the chain rule of differential calculus. Backpropagation was reinvented several times in history, but it gained significant momentum after a Nature paper by Rumelhart, Hinton, and Williams in 1986. This also coincided with the rise of computational power, which is another necessity for harnessing the full potential of neural networks. All in all, it took several decades for artificial neural networks to actually become useful.
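
As a concrete illustration of the ideas above, here is a minimal sketch of a multi-layer perceptron with one hidden layer, trained by backpropagation on the XOR problem, which a single-layer perceptron famously cannot solve. It is a toy example written for clarity under assumed hyperparameters, not an efficient or production implementation:

```python
import numpy as np

# Toy multi-layer perceptron: 2 inputs -> 4 hidden units -> 1 output,
# trained with backpropagation (the chain rule) on the XOR problem.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4))  # input-to-hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))  # hidden-to-output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(20_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)      # hidden layer with nonlinear activation
    out = sigmoid(h @ W2 + b2)    # output layer

    # Backward pass: error signals propagated with the chain rule
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out.ravel(), 2))  # typically approaches [0, 1, 1, 0]
```

Without the hidden layer and its nonlinear activations, no amount of training would make this work, which is exactly the limitation that held the original perceptron back.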

Better knowledge accelerates engineering

Thus far, I have discussed the difficulties of turning knowledge into products. But of course, new knowledge can also accelerate a design process—it is one of the premises of this article after all. Research can generate new design concepts, new product architectures, or new materials to choose from, which are all direct contributions. A particularly important type of research outcome, however, is engineering tools.

There are two ways to validate a design concept: mathematical models or physical experiments. A model takes the characteristics of the design as inputs and outputs its behavior. Analyzing the model’s output allows engineers to judge the design’s adequacy. If a model is not representative of the real world, however, it can lead to false conclusions about the design. For this reason, physical experiments are sometimes used instead, as their outcome is typically more informative. But experiments are expensive and take longer to conduct, which makes them less desirable. A good-quality model allows faster iterations on the design concepts, and ultimately accelerates the design process. To have a good-quality model, engineers need well-defined physical laws and properties, which are the result of research efforts.
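
Here is a schematic sketch of what model-based iteration buys you: a cheap model screens many candidate designs, and only the survivors earn an expensive prototype. The range model and all its numbers are made up for the purpose of the sketch:

```python
# Sketch of model-based design iteration: sweep many candidate designs through
# a cheap surrogate model, and keep expensive physical testing for the finalists.
# The "model" here is a made-up range estimator; real models are far richer.

def predicted_range_km(battery_kwh: float, mass_kg: float) -> float:
    """Hypothetical surrogate model: range drops as the vehicle gets heavier."""
    consumption_kwh_per_km = 0.12 + 0.00003 * mass_kg
    return battery_kwh / consumption_kwh_per_km

candidates = [
    {"name": "A", "battery_kwh": 60, "mass_kg": 1700},
    {"name": "B", "battery_kwh": 75, "mass_kg": 1850},
    {"name": "C", "battery_kwh": 90, "mass_kg": 2000},
]

# Fast, cheap loop: reject designs the model says cannot meet the 400-km target.
shortlist = [c for c in candidates
             if predicted_range_km(c["battery_kwh"], c["mass_kg"]) >= 400]

print("Designs worth prototyping and testing physically:",
      [c["name"] for c in shortlist])
```

The better the model, the more confidently designs can be rejected on paper, and the fewer prototypes need to be built.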


The Lockheed F-117 provides an interesting example where limitations of the design tools directly affected the shape of the final product. The plane was developed in the mid-70s by the Skunk Works, and is characterized by its flat surfaces. It was intended to be a stealth aircraft, so engineers needed to minimize the plane’s radar signature. Radars detect objects by sending out electromagnetic waves and measuring what is returned to them. To minimize detection, the goal of the F-117 design was to deflect the waves away from the radar. During the design process, engineers used a computer model to estimate the amount of energy that would be sent back to a radar. But both the computational power of the 70s and their mathematical model for wave reflection were limited, so they had to model the aircraft as an arrangement of flat panels.
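
To give a feel for why flat facets make the analysis tractable, here is a toy geometric sketch of specular reflection off a flat panel. It only checks how well a mirror-like reflection would head back toward the radar; it is nothing like the actual radar-cross-section code used on the F-117 program, and the panel orientations are invented:

```python
import numpy as np

# Toy illustration of why flat facets are convenient: a flat panel has a single,
# easily computed mirror-like (specular) reflection direction. Energy heads
# straight back to the radar only when the panel faces it.

def reflected_direction(incoming: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Specular reflection of a unit propagation vector off a flat panel."""
    n = normal / np.linalg.norm(normal)
    d = incoming / np.linalg.norm(incoming)
    return d - 2.0 * np.dot(d, n) * n

radar_to_target = np.array([1.0, 0.0, 0.0])   # radar wave travelling along +x

panels = {
    "facing the radar":  np.array([-1.0, 0.0, 0.0]),
    "tilted 30 degrees": np.array([-np.cos(np.radians(30)), 0.0, np.sin(np.radians(30))]),
    "steeply angled":    np.array([-0.2, 0.0, 0.98]),
}

for name, normal in panels.items():
    r = reflected_direction(radar_to_target, normal)
    alignment = float(np.dot(r, -radar_to_target))  # 1.0 = straight back at the radar
    print(f"{name:>18}: alignment with radar = {alignment:+.2f}")
```

The point is that each flat facet has one specular direction that is trivial to compute, which is what made a panel-by-panel analysis feasible with 1970s computing power.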

For the F-117, engineers had computational tools and a mathematical model; they were just a bit limited. The Wright brothers, on the other hand, barely had any model to work with. To design appropriate wing and propeller profiles, they had to resort to physical experiments. They built a homemade wind tunnel and, through a lot of trial and error, converged on profiles that eventually worked.

Given that doing often comes before understanding, I would argue that trial and error is almost inevitable for cutting-edge technologies. Take the F-1 rocket engine for example, the engine that powered the first stage of the Saturn V, the rocket that carried humans to the Moon in 1969. The F-1 engine was enormous—to this day, it is still the most powerful single-chamber rocket engine to have ever flown. The design team for the F-1 was surely knowledgeable, but they still designed the injector plate for the combustion chamber by trial and error. The challenge is that as rocket engines scale up, they become more susceptible to combustion-induced instabilities. Modelling the combustion dynamics was definitely out of reach, so engineers simply iterated through several injector designs. The experiments consisted of detonating an explosive charge in the combustion chamber to generate oscillations, and observing whether they would die down or grow exponentially. Trial and error remained the best—if not the only—solution they had, even if an “error” meant a massive explosion.
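
The pass/fail criterion of those bomb tests can be pictured with a toy second-order oscillator: an impulse disturbs the chamber pressure, and the sign of the damping decides whether the oscillation decays or grows. This is a cartoon of the stability criterion only, with made-up numbers, not a model of real combustion dynamics:

```python
import numpy as np

# Cartoon of the F-1 "bomb test" criterion: disturb a pressure oscillation and
# check whether it decays (stable) or grows (unstable). A linear oscillator
# p'' + 2*zeta*w*p' + w^2*p = 0 stands in for the real combustion dynamics.

def oscillation_after_impulse(zeta: float, w: float = 2 * np.pi * 500,
                              t_end: float = 0.05, dt: float = 1e-6) -> np.ndarray:
    p, v = 0.0, 1.0          # impulse: initial rate of change of pressure
    history = []
    for _ in range(int(t_end / dt)):
        a = -2 * zeta * w * v - w**2 * p   # acceleration from the oscillator ODE
        v += a * dt                         # semi-implicit Euler integration
        p += v * dt
        history.append(p)
    return np.array(history)

for zeta in (+0.05, -0.05):   # positive damping vs. negative damping
    p = oscillation_after_impulse(zeta)
    early = np.max(np.abs(p[: len(p) // 5]))
    late = np.max(np.abs(p[-(len(p) // 5):]))
    verdict = "stable (dies down)" if late < early else "unstable (grows)"
    print(f"zeta = {zeta:+.2f}: {verdict}")
```

A real engine is vastly more complicated, but the question the test answered was exactly this one: does the disturbance decay or grow?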

In several industries, companies are required to conduct experiments to get certified. Crash tests are a good example. Modelling the physics of a car crash is not trivial. Of course, engineers are getting pretty good at it, but historically this was not the case. Thus, crash tests are still mandatory for now.

Despite the rich information that can be extracted from a physical experiment, mathematical models remain preferable for iterating through design concepts whenever possible.

Products motivate research

Thus far, I have discussed how research benefits engineering, but not how engineering benefits research. Working products can help motivate, guide, and fund research projects. A good engineering team should be able to articulate the fundamental limitations of their designs, which can be turned into research objectives. A good example is turbine blades for gas turbines. Thermodynamic models of gas turbines reveal that an increase in turbine inlet temperature results in an increase in engine efficiency. Starting in the 1920s, this insight initiated a long line of research into better materials and manufacturing techniques for turbine blades, a line that is still active today.
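
As a rough illustration of that thermodynamic claim, here is a sketch of an air-standard Brayton cycle with assumed non-ideal compressor and turbine efficiencies; it shows thermal efficiency improving as turbine inlet temperature rises. The numbers are textbook-style placeholders, not data for any particular engine:

```python
# Air-standard Brayton cycle with non-ideal components: thermal efficiency
# improves as turbine inlet temperature (TIT) rises. Placeholder numbers only.

GAMMA = 1.4            # specific-heat ratio for air
T1 = 288.0             # compressor inlet temperature, K
PRESSURE_RATIO = 30.0
ETA_COMPRESSOR = 0.85  # assumed isentropic efficiency of the compressor
ETA_TURBINE = 0.90     # assumed isentropic efficiency of the turbine

def thermal_efficiency(tit_kelvin: float) -> float:
    tau = PRESSURE_RATIO ** ((GAMMA - 1.0) / GAMMA)   # isentropic temperature ratio
    t2 = T1 + T1 * (tau - 1.0) / ETA_COMPRESSOR       # actual compressor exit temp
    t4 = tit_kelvin - ETA_TURBINE * tit_kelvin * (1.0 - 1.0 / tau)  # turbine exit
    net_work = (tit_kelvin - t4) - (t2 - T1)          # per unit of cp
    heat_in = tit_kelvin - t2
    return net_work / heat_in

for tit in (1400.0, 1600.0, 1800.0):
    print(f"TIT = {tit:.0f} K -> thermal efficiency = {thermal_efficiency(tit):.3f}")
```

Higher inlet temperatures, of course, demand blades that can survive them, which is precisely what keeps the materials research going.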


Another good example is battery technology. The current enthusiasm for battery research can largely be attributed to the gradual increase in popularity of electric vehicles, and to their predicted market share.

Fundamental research

The previous section covered the main justification for applied research. Fundamental research also plays an important role in technological development, only less directly. While applied research is aimed at specific engineering problems—e.g., turbine-blade research—fundamental research is primarily motivated by the need to satisfy human curiosity, like Einstein with his theory of special relativity. In practice, research projects exist on a continuum between the fundamental and applied designations, but I will suppose a dichotomy for the sake of this discussion.

It is often said that fundamental discoveries can eventually lead to new technologies. A famous example is how compensating for relativistic effects is crucial for the proper functioning of the GPS system. Thus, there is no doubt that nations should invest in fundamental research. Less obvious however is whether companies should do so as well.


Sometimes fundamental theories are a source of inspiration for new technologies. A good example is how the Doppler effect inspired satellite-based navigation. Shortly after Sputnik’s launch in 1957, scientists at the Johns Hopkins Applied Physics Laboratory realized that they could use the Doppler effect to track the satellite’s location in space. This then inspired the reverse application, where a user determines their location on Earth from a known satellite position. As with any invention, it’s not quite clear whether this is the only time satellite-based navigation was “invented”. But nonetheless, this one was inspired by a physical phenomenon. A good understanding of fundamental theories not only allows one to look at a problem differently and find new solutions; it may also inspire new applications altogether. But as discussed previously, it is difficult to turn knowledge into products, so companies should not expect to make it a repeatable process.
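
For reference, the physical phenomenon in question is the classical Doppler shift: the frequency received on the ground depends on the satellite’s radial velocity relative to the receiver, so the received frequency sweeps through the transmitted frequency at the moment of closest approach:

```latex
% Classical Doppler shift for a transmitter moving with radial velocity v_r
% (positive when receding) relative to the receiver, valid for v_r << c:
\[
f_{\text{received}} \;\approx\; f_{\text{transmitted}} \left( 1 - \frac{v_r}{c} \right),
\qquad
f_{\text{received}} = f_{\text{transmitted}} \;\; \text{when } v_r = 0
\;\; (\text{closest approach}).
\]
```

The shape of that frequency sweep over a satellite pass is what let the APL scientists pin down the satellite’s orbit and, run in reverse, a receiver’s position.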


Yet, some companies choose to invest in fundamental research despite the odds. I will now discuss two examples: Bell Labs and Google DeepMind. Bell Telephone Laboratories was created in 1925 by AT&T, which held a monopoly over telephone communications in the U.S. that was legally contested but tolerated. This tolerance was in part due to a deal with the U.S. government under which Bell Labs would make the fruits of its research—its patents—available to competitors for small royalties, and refrain from interfering with their use. Nevertheless, AT&T was immensely rich; its revenue represented 1.9% of U.S. GDP in 1956. Bell Labs was certainly very innovative—it is where the transistor was invented, and several of its scientists were awarded Nobel Prizes. But given the exceptional nature of this situation, it is difficult to argue that emulating Bell Labs is a winning strategy for companies operating in a regular competitive environment.

Alphabet seems to be investing in somewhat fundamental research with its DeepMind subsidiary. I think that the activities at DeepMind are about cultivating a pool of internal capabilities, making sure that Google remains at the state of the art, and exploring potential commercial applications. Surely, developing an autonomous agent to play video games cannot be the real problem of interest. But anyone at DeepMind working on a hard problem of this sort contributes to Google’s cumulative knowledge, and also gains in personal knowledge and capabilities, thereby making Google more effective at solving problems pertaining to real commercial applications.

Technology progress is a positive feedback loop

As an answer to the original question, I would argue that technologies progress through the feedback interaction of research and engineering. Oftentimes, this interaction takes place over long cycles, with the research and engineering teams being totally disconnected. A research team can publish a paper that will benefit an unrelated engineering team in their product development, perhaps without ever knowing it. But sometimes the R&D cycles can also be very short. An example is the invention of the transistor in 1947 by Bardeen, Brattain, and Shockley at Bell Labs. These well-trained physicists advanced the understanding of solid-state physics and developed the first working transistor prototype concurrently. I believe this is a remarkable example, however, not necessarily the norm.

Why so slow?

The next question is what limits the rate of technological progress. Take for example the blended wing-body aircraft architecture. For over 60 years, commercial aircraft have mostly been based on the classical tube-and-wing architecture. Below are pictures of the Boeing 707—a model that flew for the first time in 1957—and the 787, which first flew in 2009. Of course, things have changed: the 787 is about twice as efficient as the 707, which is a tremendous improvement. This is partly due to the engines’ larger bypass ratio, the lighter weight of composite materials, the extensive use of electrical components to replace some hydraulics, and even a boundary-layer control system on the tail of the aircraft. There is no doubt that plenty of money and research effort went into these technological developments. But we still don’t have commercial planes based on the blended wing-body architecture, despite the well-documented (theoretical) superiority of the concept. Why is that?

The blended wing-body concept

I believe design inertia has two root causes: the need for reliability, and the desire to maximize short-term profit. First, let’s explore reliability. Reliability is expensive, and can only be obtained through extensive engineering. A crucial output of the engineering process is predictability—a well-engineered product will do exactly what it is intended to do, all the time. Engineers need not only to make things work, but also to understand and characterize the limits of their products. For products where the tolerance for failure is extremely low—e.g., an aircraft—reaching an acceptable level of reliability becomes a tedious process. It involves more than simply testing the product; it also means validating all the design assumptions and mathematical models used during the design process. The goal is to build confidence in the predictions obtained during design analysis. When a design is completely new, as opposed to an iteration on a previous design, predictability is very expensive, because the design-validation efforts of previous projects are harder to recycle. Everything has to be restarted from scratch.

One aerodynamic advantage of the blended wing-body concept comes from the tailless design. It makes the aircraft harder to control, however. Not impossible: Northrop did it in the late 80s with the B-2 Spirit bomber. Nevertheless, to develop a control law for a blended wing-body design, Boeing could not simply recycle a control law from the 787. To foster advanced concept exploration, NASA provides assistance for this kind of early technology development. For example, Boeing and NASA collaborated on the X-48 program to better understand the aerodynamic characteristics of blended wing-body aircraft. For new aircraft designs, the road is long between the proof-of-concept stage and commercial use.

The X-48 experimental plane

Finally, let’s discuss how the desire to maximize short-term profit creates design inertia. The electric vehicle is a prime example. In June 2021, the Tesla Model 3 became the first electric-car model to reach 1 million sales. It could have been done by any long-standing automaker, but it wasn’t. It was achieved by a company that was not even 20 years old. The Innovator’s Dilemma suggests why successful companies tend to neglect new technologies in the short term. New technologies typically underperform at first, but can (sometimes) outclass established ones in the long run. It can be attractive for established companies to capitalize on their past technological-development efforts—and profit from their current market-leading position—by incrementally improving their products and avoiding drastic technology changes. To break the pattern, established companies could aim for smaller markets at first, and gradually transfer the new technologies to larger ones, much like a start-up would. After all, this is exactly what Tesla did by starting with the Roadster, then expanding with the Model S, and eventually the Model 3. Again, there is no reason to believe that it could not have been done by a long-standing automaker. This suggests that a choice was made: let the electric-vehicle market grow first, then tackle it when it is too large to ignore. While the Innovator’s Dilemma mainly concerns which companies get the larger share of the pie, I would argue that this tendency also affects the rate of technology progress. Could we have had electric cars a few decades earlier?