New technologies allow us to gather more data than ever before. However, what use is all this data if decision makers cannot access its meaning? The information revolution is not just about aggregating more information, but also creating better ways of visualizing it. Recent advances make it possible to create virtual models of entire cities and run accurate simulations of future scenarios, giving experts and non-experts a visceral experience of potential catastrophes such as major floods. This could create a powerful drive towards action and support better decision making on the most pressing global challenges – as long as institutional practice embraces the possibilities offered by these new modes of data visualization.
In the movie Amadeus, the composer Salieri reads Mozart’s music off the page and, without hearing a single note played aloud, exclaims “It is miraculous!” Salieri is an expert: he can hear the music in his head with near-perfect fidelity. In many disciplines, as in music, only experts can experience the meaning of symbols directly. The vast majority of us can only have direct experience when our senses are engaged – when the orchestra plays.
Roughly 90% of the global datasphere was created in the last two years, yet its meaning will remain largely invisible to us until we can set it against backdrops we understand. The term “information revolution” is often used to describe the exponential advance of technologies that generate, transmit, and store information, but the real revolution will be one of context.
When we think about how to protect civilization from physical catastrophe, it is clear that intuitive representation of data about the physical world allows for better decision making. We need to display data gathered from the world around us in a way that helps us assess and plan for global risks, and better forecast their local impact. This applies both to decision makers and to the public. If governance is to become more inclusive, information must become more egalitarian. Presenting it better is part of that effort.
New technological developments promise a golden age of data visualization. The idea that we can get real-time information about objects in our physical environment is already pervasive. The Internet of Things increasingly monitors the physical world with embedded sensor technology, providing us with information about everything from smart grids and smart homes to biochips and heart monitor implants. But as we use sensors to record information about the objects that surround us, we are creating an absolute flood of data. We must represent that data in a way that is natural for humans to understand – a way that better couples the information to the human nervous system. This will be increasingly important as we analyze data on larger scales.
Cities are an important focal point when considering approaches to mitigating catastrophic risk. Google and Apple have each done an exemplary job of capturing and modeling cities at great scale from the air, but not with the street-level accuracy and geometric detail that enables the kind of intuitive data representation we’re talking about.
Very recent advances in technology have made it possible to build virtual models of entire cities with centimetric precision, and these models can generate accurate visualizations of real city-related data. The combination of ultra-high spatial precision and visual resolution gives rise to functionalities that were not possible before. Objects in a virtual city model of this kind – windows, fire hydrants, parking meters, power transformers, vehicles – can have discrete identities and be part of a searchable database. Data from IoT sensors in the real, physical city can now be linked directly to the corresponding objects in the 3D city model. The data can then be visualized in real time, not on a 2D dashboard with numerical and graphical displays, but in the virtual copy of the physical location where the information is actually being generated in the city.
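In software terms, the linkage described above can be thought of as an index: every object in the virtual city carries a discrete identity, and live readings from its physical counterpart are attached by that ID. The sketch below is purely illustrative – the class and object names are hypothetical, not GeoSim’s actual data model:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: each object in the virtual city model has a discrete,
# searchable identity; live IoT readings are attached to it by that ID.

@dataclass
class CityObject:
    object_id: str                  # e.g. "hydrant-0412"
    kind: str                       # "fire hydrant", "power transformer", ...
    position: tuple                 # (x, y, z) in the model, metres
    readings: dict = field(default_factory=dict)  # latest sensor values

class CityModel:
    def __init__(self):
        self._objects: dict[str, CityObject] = {}

    def add(self, obj: CityObject) -> None:
        self._objects[obj.object_id] = obj

    def ingest(self, object_id: str, sensor: str, value: float) -> None:
        """Link a real-world sensor reading to its virtual counterpart."""
        self._objects[object_id].readings[sensor] = value

    def search(self, kind: str) -> list[CityObject]:
        """The model doubles as a searchable database of city objects."""
        return [o for o in self._objects.values() if o.kind == kind]

# A transformer in the physical city reports its temperature, and the
# reading appears on the matching object in the model, ready to render
# in place rather than on a 2D dashboard.
model = CityModel()
model.add(CityObject("transformer-07", "power transformer", (120.0, 45.5, 2.0)))
model.ingest("transformer-07", "temperature_c", 61.2)
print(model.search("power transformer")[0].readings)
```

The point of the sketch is the coupling: the sensor stream and the geometry share one identity space, so data can be drawn where it originates.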
Here’s why this is so powerful for catastrophic risk mitigation: not only can we use high-fidelity models to track and visualize all kinds of human activity, but we can run highly accurate simulations of future scenarios in order to quantify the effects, and visualize them. Using these simulations, experts, non-expert city officials, and city residents alike can get a visceral understanding of what would happen in the case of, say, a major flood caused by rising sea levels.
Anyone who has seen a standard 2D flood zone map knows that it is far from emotionally riveting. It has legends, symbols, and shadings that require interpretation. And it is static: unlike real water, nothing moves. This is the tool with which city officials plan for floods, yet when we look at one, our nervous systems are barely engaged. We’re not driven to act.
But if we immerse ourselves – either with a laptop or in full VR – in a precise 3D model of a city, and run a dynamic simulation of the same flood zone data, our nervous systems kick into gear. As we manually change the water level, we see the water lapping at the doors of individual office buildings and coffee shops in neighborhoods we know. We can observe, at the spatial precision of a few centimeters, whether a bridge will be inaccessible, or whether an unprotected power substation will be submerged. We can see where the best escape routes would be and where we’d need to dispatch first responders and emergency personnel. The technology removes the task of interpretation and replaces it with what comes naturally to human beings: immersion and interaction.
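Underneath that interaction sits a simple query: given a water level, which objects in the model fall below it? The toy sketch below stands in for the immersive version, with made-up object names and elevations:

```python
# Hypothetical sketch: sweep a water level over a handful of model objects
# and ask what floods first. Elevations are invented, in metres above sea level.

city_objects = {
    "substation-3": 1.2,       # unprotected power substation, low-lying
    "coffee-shop-door": 2.1,
    "bridge-approach": 2.8,
}

def submerged(objects: dict, water_level_m: float) -> list[str]:
    """Objects whose elevation falls at or below the given water level."""
    return sorted(name for name, elev in objects.items() if elev <= water_level_m)

# Raising the level step by step reveals the order in which things go under,
# which is what the immersive view lets you see at a glance.
for level in (1.0, 2.5, 3.0):
    print(f"{level} m: {submerged(city_objects, level)}")
```

In the real system the same test runs against centimetre-precision geometry for every object in the city; the logic, though, is no more exotic than this comparison.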
From there, we can imagine a world where all of that information isn’t just shared within a city, but also between and among cities: an expanding global network of virtual cities that exchange information in order to educate, optimize, heal and self-construct, sharing the best practices discovered by each in order to raise the effectiveness of every city as they plan for the future.
However, institutional practice is still decoupled from the capabilities of technology. We know that good visualization works. But we’re still expecting people – and particularly decision makers – to understand data by reading it off a page. Both the public and their representatives need tools that enable them to model and visualize risks in a way that compels them to act.
We’re leaving behind a time when only Salieri-like experts could read the code, and entering a new era in which the visualization of complex data will become a living performance, contoured for our human senses. This will enable all of us to experience meaning more naturally, and inspire us to take actions that will preserve the places – and principles – we value most.
Tasha McCauley is a technology entrepreneur living in Los Angeles. Her current work with GeoSim Systems centers around a new technology that produces high-resolution, fully interactive virtual models of cities. Prior to her work with GeoSim, she co-founded Fellow Robots, a robotics company based in Silicon Valley. She co-directs Ten to the Ninth Plus Foundation, an organization focused on empowering exponential technological change worldwide.