Bridging the Gap Between Social Inertia, Tech Hyper-Acceleration
In 1959, British writer C.P. Snow delivered a famous lecture at Cambridge University, ruing the deep chasm between the realm of science and technology and that of arts and culture.
The ‘two cultures’, he argued, are disparate and disconnected at a time when there is a need for greater cooperation, coordination, alignment of purpose, and commonality of mission between them, and great scope for synergistic enrichment.
While sweeping technical advances have led to the age of digitalization, ‘information overload’, and technology diffusion across all aspects of our lives today, the boundaries between tech domains and wider society and culture, paradoxically, remain impermeable in many ways.
The tug-of-war between the automated futurism beckoning us and the inherited social, institutional, cultural, and political frameworks we cling to creates an anxious dilemma and hobbles progress.
The theme of the fascinating ‘Exponential Age: How Accelerating Technology is Transforming Business, Politics and Society’ is this glaring disconnect: the inability of society, cultural norms, and institutions to keep pace with tech innovations. The widely acclaimed book ranked at the top of the Financial Times and Sunday Times Book of the Year charts last year.
Economist and public thinker Mariana Mazzucato deemed it an essential read for ‘designing a more inclusive and sustainable system with a re-direction of technological change at its centre’.
Azeem Azhar, the author of Exponential Age, is an investor, entrepreneur, podcast host, and technologist who looks at the intersection and collision of technologies and their influence on societies and economies. He is also a member of the World Economic Forum’s Global Futures Council.
His podcast guests have included former UK Prime Minister Tony Blair, the bestselling author Yuval Noah Harari, former LinkedIn CEO Reid Hoffman, and the Silicon Valley scientist and entrepreneur Andrew Ng.
In an exclusive interview with Geospatial World, Azeem Azhar discusses his book, the frontier technologies changing the world, the power of geospatial, networks, connectivity, learning curves, and how digitalization impacts the world.
The dots that you have joined across automation, the future of work, cybersecurity, climate change, energy transition, and drone warfare all point towards the perils of this acute mismatch between innovation and society, and the way it can precipitate discontent and upheaval. What do you think can be done to reimagine how technological impact is assessed, and to reframe the discourse around technology?
It’s a really important question. I think that one of the main challenges is that the way these technologies behave is quite often different to the way that previous technologies have behaved.
One of the strongest examples, I think, is the positive feedback loops that you get from one iteration of the technology to the next. These positive feedback loops drive down costs extremely fast. This is the learning cycle that I talk about in the book.
Technology has led to an unprecedented decline in price, and all complex engineered technologies are susceptible to learning effects.
The Model T Ford declined in cost by 75% between 1909 and 1929, and that was because of learning effects: they became more and more efficient. There were also economies of scale. That’s impressive, but it doesn’t compare with the decline in the cost of a computer from the 1960s to today. One of the fundamental units in a computer is an engineered product called a transistor. In 1958, a transistor cost about $1,500 in modern money, whereas today you can get 100 million transistors for a dollar.
That’s a decline that you don’t get to see in other markets. So, I think the first thing is to recognize that these technologies see such drastic declines in price that it drives their market ubiquity.
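To make the learning-curve arithmetic concrete, here is a minimal sketch of Wright’s law, under which unit cost falls by a fixed percentage with every doubling of cumulative production. The 20% learning rate and the starting figures are illustrative assumptions, not numbers from the book.

```python
import math

def unit_cost(initial_cost: float, cumulative_units: float,
              learning_rate: float = 0.20) -> float:
    """Wright's law: cost falls by `learning_rate` with each doubling
    of cumulative production. All inputs here are illustrative."""
    doublings = math.log2(cumulative_units)
    return initial_cost * (1 - learning_rate) ** doublings

# A technology starting at $100/unit, after cumulative production grows 1,000x:
print(f"${unit_cost(100.0, 1_000):.2f} per unit")  # ~$10.82
```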
Digital technologies and many of the other technologies, even the biological and additive ones, are based around information. They have network effects around the data themselves. What that means is that early advantage can turn into really long-standing advantage.
So, it’s difficult to dislodge some of the players. The dominant operating system today, Microsoft Windows, is built on the same basis as MS-DOS from the 1980s. That’s quite a longstanding, strong, deep position for a business to hold onto. The barrier to entry is connected to these network effects, which isn’t common in other markets.
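One rough way to see why an early lead compounds is Metcalfe’s law, which values a network in proportion to the square of its user count; the numbers below are purely illustrative, not figures from the interview or the book.

```python
# Metcalfe's law: a network's value scales roughly with the square of its
# user count, so a 5x lead in users becomes a 25x lead in network value.
# The user counts below are made-up illustrations.

def metcalfe_value(users: int) -> int:
    return users ** 2

incumbent_users, challenger_users = 10_000_000, 2_000_000
ratio = metcalfe_value(incumbent_users) / metcalfe_value(challenger_users)
print(f"Incumbent network is worth ~{ratio:.0f}x the challenger's")  # ~25x
```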
Also, the way these technologies behave changes fundamental advantage in industrial structure. If you look at, say, the renewables sector, they deliver kilowatt-hours of power. They need physical resources, but they don’t need oil and gas.
Those assets suddenly start to look much less useful; they actually start to look rather like stranded assets. These are examples of the dramatic decline in price driven by feedback loops, of early advantage turning into long-lasting advantage because the network effects are of a different type, and of the fact that these technologies fundamentally change what inputs are required. Those are the reasons why old models don’t necessarily work.
We need to ask more fundamental questions about what we are actually trying to achieve. I think that frames the problem. I don’t think it necessarily tells you how you then start to reframe the way people think, but if you start to understand those frames, then you can start to think about what those second and third order effects are.
You have identified four key sectors that are undergoing tremendous transformation due to the convergence of exponential technologies: energy, manufacturing, computing, and biology. Any reason for not including mobility, when today people are talking about it for zero emissions, decarbonization, and decongestion? Also, there is another component at the core of them all: location. What do you think is the impact of location and geospatial technologies on exponential technologies, and how do you foresee their future?
The way that I look at mobility is that it arises from the core, underlying technologies. If you look at a modern electric vehicle, one of these $4,000 three-seater reverse trikes that you can now buy in India, what has it got in there? It’s got a lithium-ion battery, which is one of the new energy technologies, and it’s got some computing to manage the powertrain and maybe to manage advanced driver assistance.
So for me, mobility comes from the intersection of the core technology domains. It’s an application, in a sense, rather than a core technology. That’s just the way that I think about mobility. And you can ask the same of robotics: is a robot taxi, say, more to do with computers or more to do with new energy platforms?
These things layer up. Then when you talk about geospatial technologies, those, again, end up being critical, but they are also hybridized technologies. One of the things that I talk about in the book is what drove a lot of scale in Earth Observation, which is a core geospatial technology.
And those are the same dynamics that are driving these other technologies. It was about modularization and componentization. It was about turning satellites from an N of six or eight into an N of 2,000, and productizing them so that, through learning-curve effects, you can bring down costs.
Some of the elements of it were that you were able to use standardized components coming out of the smartphone dividend: better CCDs, sensors, and processors, and improved battery technologies as well.
And so, again, that for me is partly a compound technology. Now it’s really critical, because what you in fact see is that the cost of geospatial data has radically declined. We have GPS in the first place, which is essentially a common good, a public good. There are soon going to be four operational GNSS networks: Europe’s Galileo, the original US GPS, Russia’s GLONASS, and China’s BeiDou.
In Earth Observation, the price of securing military-grade real-time images has declined tremendously. Commercially, we can today buy what the Americans had at the start of the first Iraq War in 1991 for a few dollars. Satellite bandwidth has declined dramatically in price too, from the same learning-curve dynamics, which becomes extremely critical when we are turning the physical Earth into a digital, actionable dataset.
We are doing the same thing within biology, which is a far more complex dataset, where we are able to look at proteins, genes, gene expression, and protein interactions, and turn that into data. This is why the price of doing things with protein engineering or cell engineering has declined so dramatically over the last 20 years.
The question is what then happens? And what you start to see is the ubiquity of these technologies and their applicability in lots and lots of different domains.
Within the geospatial space, some of the examples that I find particularly interesting include competitive intelligence.
For instance, retailers can count the number of cars in the parking lot of a competitor to see whether its sales are high or low. There are applications even for climate mitigation, using real-time data to identify urban heat islands, or for better urban planning.
If you have a data model that is purely digital, and you decide you want to tweak how a software agent behaves in this virtual world, you can do that based on the data that you’ve seen. And it’s really easy, because all the parameters are controlled. But of course in the real world, the physical world, that’s quite hard.
Once we have this digital representation, which geospatial data can give us, then we get a robot, as it were, to interact in the physical world, where there are lots of parameters. Data and anomalies that are not captured by our sensing are a more difficult problem. Then we have to figure out how to close the loop. And I think we’re making progress in closing that loop; it’s quite interesting that the autonomous car projects that are furthest ahead are those that rely on super-high-resolution mapping as well as localized sensing.
For example, the Apple car prototypes are really struggling because it turns out that the robots on their own, without the geospatial data, can’t operate. How far does it go? It’s quite interesting to see.
When I got my first computer, back in 1981, we had one camera in the family. When I published my book, we had 59 cameras in the house, and we now probably have 70, because each car has got five cameras in it and every phone has got multiple cameras. But none of that data is joined up, and none of it is really being analyzed in a sensible way. The idea of sensors and sensor fusion enriching the world doesn’t yet exist.
And I think the question is: when does that really start to happen? What does it look like, and what infrastructural changes will it require? Then, on the other side, there are really interesting things that we might be able to do with high-precision sensing without access to GPS.
Just think about being in a subway, a mall, or an office complex. As you walk around, if you just use the inertial measurement systems within phones, after about 10 minutes of walking your error bounds are huge. But there are real improvements in those technologies, where people think that even after an hour you can be at meter-level accuracy.
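A toy dead-reckoning simulation shows why those error bounds blow up: any small accelerometer bias is integrated twice, so position error grows roughly with the square of elapsed time. The bias value below is an assumed figure for illustration, not a real phone IMU spec.

```python
# Toy dead-reckoning model: a constant accelerometer bias, integrated twice,
# yields position error of roughly 0.5 * bias * t^2.

DT = 0.01          # integration time step, seconds
ACCEL_BIAS = 0.01  # assumed constant accelerometer bias, m/s^2

def drift_after(seconds: float) -> float:
    velocity_error = position_error = 0.0
    for _ in range(int(seconds / DT)):
        velocity_error += ACCEL_BIAS * DT      # first integration -> velocity
        position_error += velocity_error * DT  # second integration -> position
    return position_error

for minutes in (1, 10, 60):
    print(f"{minutes:>2} min of walking -> ~{drift_after(minutes * 60):,.0f} m drift")
# 1 min -> ~18 m, 10 min -> ~1,800 m, 60 min -> ~64,800 m
```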
Then there is quantum sensing, which might be able to take that accuracy up a level. It’s quite interesting, because then we can map highly accurate digital twins, allowing us to manipulate and manage the physical world with the same degree of precision as digital models.
‘If the story of the industrial age was one of globalization, the story of the Exponential Age will be of re-localization’ is a rather broad statement. Why do you think mass deployment of technologies such as AI, IoT, and VR will empower localism instead of turbocharging globalization, and can this be said to be one of the lasting impacts of the pandemic?
The balance between localization and globalization is about forces of attraction versus forces of repulsion. When you globalize things, you lose a certain degree of control. When you localize them, you lose a certain degree of scale. Then you do things to improve what you get from globalization, such as digitizing supply chains. Then there are highly specialized areas. For instance, a large part of the chip industry is based in Taiwan and parts of Korea.
When you look at some of the exponential technologies, they allow us to do things locally that we couldn’t have done before.
Renewable power allows us to generate power regionally without needing to ship millions of tons of oil every year. Now, you need to move the solar panels the first time, and they need to be manufactured somewhere, but it creates a headwind on that particular flow of materials. The same is true with technologies like vertical farming, which can create foodstuffs locally, and even for manufacturing.
There are always slightly different transactions at play. In the case of renewables, you end up with much cheaper, cleaner power.
Now, vertical farming is too expensive right now, but it’s getting cheaper due to these learning effects. So the first dynamic is that we can establish local production of energy, food, and manufactured goods. But then there’s a second dynamic, which is what people learned from COVID.
I talk about this a little bit in the book, even though I wrote it before COVID: the fragility of having single-sourced supply chains, and the fact that you also don’t have security integrity, because you can’t have visibility of every component manufacturer.
This is a problem the car manufacturers realized. They were dealing with a tier-one supplier who had tier-two, tier-three, and tier-four suppliers behind them, and that supply chain got less and less reliable the further down you went. Ultimately, as the old proverb has it, ‘For want of a nail, the kingdom was lost’.
It’s also about the national security implications, because the fundamental architectures of our world rely on these types of technologies. And so, you can’t have the things that your economy and your sovereignty depend on being located elsewhere.
There’s a sense of re-localization, but there are two other things that are important. One is labor cost arbitrage: a lot of offshoring and globalization was mainly about labor cost arbitrage rather than anything else, and this starts to change as poorer countries get richer. The other is specialization: there are things that a Taiwanese chip manufacturer or a contract manufacturer in China can do which cannot be replicated anywhere else in the world.
One reason why cities have won and will continue to win is that within a city, you can develop a very highly specialized economy. You can specialize, you can make a lot of money. And so, specialization benefits from big cities and therefore, you have this economic incentive to go there. And I think that will continue to happen, especially in the parts of the world that are getting richer and richer, India being an example, but Sub-Saharan Africa as well.
Cities will matter. But once you get over that, look at the story in Europe or the US, where many of the big cities are relatively small in comparison. Our biggest city in the UK, London, is only 8 million people, and it is not going to get much above that in the next 25 years. But how big is Chennai going to be in 25 years?
If someone doesn’t come to London for a high-end job, they can live 150 miles away and do it remotely. I don’t know if that dynamic will outweigh the tidal wave of urbanization in emerging markets. The reason you can do that in the UK is that we’ve got 120 years of telephone infrastructure, 70 years of freeway infrastructure, and a national grid that’s been there for a really long time.
I think the challenge at the scale of emerging economies is that if you’re in a tier-four village, you might be 20 years away from the reliable broadband that allows you to do that.
I would say that there’s a stage where cities really remain important, even if their relative importance in Europe starts to change because people can, for infrastructural reasons and with the ability to work remotely, stay in tier-two and tier-three areas.
Your take on ‘Moore’s Law’ as a guiding framework and a lodestar for innovation, rather than a self-fulfilling prophecy, is an interesting one. Miniaturization is an ongoing tech trend, and it is always complemented by fast connectivity and ease of access. What is the relevance today of an axiom that was defined at the beginning of the electronic era?
Its relevance is that people understand it, they believe in it, and they can carry it forward in their own heads. The problem is that for 15 years, scientists have been saying that Moore’s law, that is, acceleration through the miniaturization of chips, is going to come to an end. The industry has known that, which is why it started to use other approaches to deliver the performance gains.
So in a way, I think it’s better to take a casual interpretation of Moore’s law, which is that computers are going to get faster every couple of years, because the strategies being used to deliver this price-performance improvement have changed. And they changed constantly during the key eras of Moore’s law as well.
And what we are starting to see now are strategies like architectural changes within chips, using more pipelines and more cores. We are beginning to see 3D chips showing up, which stack wafers on top of each other.
Chip manufacturers are saying we can deliver more performance benefits if we specialize our chips. Instead of being general-purpose GPUs, have them specialize, say just on machine learning or something else.
We are also witnessing the delivery of things like the cloud, which, although not subject to Moore’s law, delivers effectively unlimited computing power when you need it for short periods of time. Then there are software enhancements as well.
In fact, the largest improvements in AI performance have come not from the improvements in chip quality, but from algorithmic optimization and tuning.
I think the question for the Exponential Age is, do we expect that process of acceleration of computer performance to continue?
And the answer is yes, because there are different strategies in place. What we’ve actually seen empirically in the data over the last seven or eight years, when Moore’s law apparently ended according to the most pessimistic scientists, is continued price-performance improvement.
In other words, the amount of computation you can buy for 100 dollars has continued to increase by 50 to 60% per annum, or even higher in certain use cases.
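As a quick compounding check on that figure (taking 55% as the midpoint of the quoted 50-60% range, an assumption for illustration):

```python
# Compounding the quoted price-performance trend: at ~55%/year growth
# (assumed midpoint of the quoted 50-60% range), how much more computation
# does the same $100 buy after 8 years?

growth_per_year = 1.55
years = 8
multiple = growth_per_year ** years
print(f"~{multiple:.0f}x more computation per $100 after {years} years")  # ~33x
```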
I am happy to use Moore’s law when I talk about it. I am also happy to not use it if it confuses the audience or it’s too controversial.